Test Report: Docker_Linux_crio_arm64 21978

c78c82fa8bc5e05550c6fccb0bebb9cb966c725e:2025-11-24:42489

Tests failed (59/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.62
44 TestAddons/parallel/Registry 15.86
45 TestAddons/parallel/RegistryCreds 0.56
46 TestAddons/parallel/Ingress 143.81
47 TestAddons/parallel/InspektorGadget 5.28
48 TestAddons/parallel/MetricsServer 5.43
50 TestAddons/parallel/CSI 44.26
51 TestAddons/parallel/Headlamp 3.21
52 TestAddons/parallel/CloudSpanner 6.33
53 TestAddons/parallel/LocalPath 10.47
54 TestAddons/parallel/NvidiaDevicePlugin 6.33
55 TestAddons/parallel/Yakd 6.28
99 TestFunctional/parallel/DashboardCmd 302.57
106 TestFunctional/parallel/ServiceCmdConnect 603.62
108 TestFunctional/parallel/PersistentVolumeClaim 262.86
134 TestFunctional/parallel/ServiceCmd/DeployApp 600.73
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
144 TestFunctional/parallel/ServiceCmd/Format 0.39
145 TestFunctional/parallel/ServiceCmd/URL 0.4
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.91
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.51
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 512.2
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.82
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.37
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.43
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.37
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 737.22
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.19
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.08
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.73
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.03
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.58
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.65
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 3.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.16
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.08
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.43
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.23
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.49
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.1
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 124.31
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.07
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.3
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.27
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 1.69
293 TestJSONOutput/pause/Command 1.7
299 TestJSONOutput/unpause/Command 1.83
358 TestKubernetesUpgrade 802.35
384 TestPause/serial/Pause 7.44
440 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7200.071
TestAddons/serial/Volcano (0.62s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable volcano --alsologtostderr -v=1: exit status 11 (616.659932ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:15:35.837913 1813670 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:15:35.838663 1813670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:15:35.838672 1813670 out.go:374] Setting ErrFile to fd 2...
	I1124 09:15:35.838678 1813670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:15:35.839151 1813670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:15:35.839590 1813670 mustload.go:66] Loading cluster: addons-048116
	I1124 09:15:35.840285 1813670 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:15:35.840359 1813670 addons.go:622] checking whether the cluster is paused
	I1124 09:15:35.841846 1813670 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:15:35.841925 1813670 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:15:35.842555 1813670 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:15:35.861270 1813670 ssh_runner.go:195] Run: systemctl --version
	I1124 09:15:35.861328 1813670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:15:35.881798 1813670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:15:35.995570 1813670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:15:35.995656 1813670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:15:36.030699 1813670 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:15:36.030718 1813670 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:15:36.030723 1813670 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:15:36.030727 1813670 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:15:36.030730 1813670 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:15:36.030734 1813670 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:15:36.030738 1813670 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:15:36.030741 1813670 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:15:36.030744 1813670 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:15:36.030755 1813670 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:15:36.030759 1813670 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:15:36.030762 1813670 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:15:36.030765 1813670 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:15:36.030768 1813670 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:15:36.030771 1813670 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:15:36.030778 1813670 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:15:36.030785 1813670 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:15:36.030789 1813670 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:15:36.030793 1813670 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:15:36.030796 1813670 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:15:36.030800 1813670 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:15:36.030803 1813670 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:15:36.030808 1813670 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:15:36.030811 1813670 cri.go:89] found id: ""
	I1124 09:15:36.030869 1813670 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:15:36.047670 1813670 out.go:203] 
	W1124 09:15:36.050521 1813670 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:15:36.050555 1813670 out.go:285] * 
	* 
	W1124 09:15:36.361208 1813670 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:15:36.364053 1813670 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.62s)
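Every `addons disable` failure in this run exits with the same `MK_ADDON_DISABLE_PAUSED` error: minikube's paused-cluster check shells out to `sudo runc list -f json`, which fails with `open /run/runc: no such file or directory` because this crio node evidently keeps no runc state there (crio may be configured with a different default runtime, such as crun). A minimal sketch of a more tolerant check is below; the `list_paused` helper and the crun fallback are hypothetical illustrations, not minikube's actual code, and the assumption that crun mirrors runc's `list` flags is unverified:

```shell
#!/bin/sh
# Hypothetical fallback for the paused-container check seen failing above.
# Assumption: when crio uses crun, runc's state dir /run/runc is absent,
# so `runc list` exits 1 exactly as captured in this log.
list_paused() {
  if [ -d /run/runc ]; then
    # The same command the failing check runs (addons.go "check paused").
    sudo runc list -f json
  elif command -v crun >/dev/null 2>&1; then
    # Assumption: crun accepts the same `list` subcommand and flags as runc.
    sudo crun list -f json
  else
    echo "no OCI runtime state dir found; treating cluster as unpaused" >&2
    return 0
  fi
}
```

On the failing node, `crictl info` would show which low-level runtime crio is actually configured with before deciding which state directory to consult.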

TestAddons/parallel/Registry (15.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.00044ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003307299s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003546952s
addons_test.go:392: (dbg) Run:  kubectl --context addons-048116 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-048116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-048116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.249924951s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable registry --alsologtostderr -v=1: exit status 11 (345.38035ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:16:02.271258 1814595 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:16:02.272285 1814595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:02.272306 1814595 out.go:374] Setting ErrFile to fd 2...
	I1124 09:16:02.272313 1814595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:02.272674 1814595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:16:02.273147 1814595 mustload.go:66] Loading cluster: addons-048116
	I1124 09:16:02.273640 1814595 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:02.273664 1814595 addons.go:622] checking whether the cluster is paused
	I1124 09:16:02.273843 1814595 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:02.273865 1814595 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:16:02.274678 1814595 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:16:02.300797 1814595 ssh_runner.go:195] Run: systemctl --version
	I1124 09:16:02.300866 1814595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:16:02.326420 1814595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:16:02.453412 1814595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:16:02.453517 1814595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:16:02.497401 1814595 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:16:02.497427 1814595 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:16:02.497432 1814595 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:16:02.497436 1814595 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:16:02.497439 1814595 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:16:02.497442 1814595 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:16:02.497446 1814595 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:16:02.497448 1814595 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:16:02.497451 1814595 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:16:02.497458 1814595 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:16:02.497461 1814595 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:16:02.497464 1814595 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:16:02.497467 1814595 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:16:02.497470 1814595 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:16:02.497473 1814595 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:16:02.497478 1814595 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:16:02.497481 1814595 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:16:02.497485 1814595 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:16:02.497488 1814595 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:16:02.497490 1814595 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:16:02.497495 1814595 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:16:02.497498 1814595 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:16:02.497501 1814595 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:16:02.497504 1814595 cri.go:89] found id: ""
	I1124 09:16:02.497552 1814595 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:16:02.513985 1814595 out.go:203] 
	W1124 09:16:02.518638 1814595 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:16:02.518672 1814595 out.go:285] * 
	* 
	W1124 09:16:02.528828 1814595 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:16:02.531858 1814595 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.86s)

TestAddons/parallel/RegistryCreds (0.56s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.578197ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-048116
addons_test.go:332: (dbg) Run:  kubectl --context addons-048116 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (282.282096ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:17:02.854978 1816265 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:17:02.855891 1816265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:17:02.855951 1816265 out.go:374] Setting ErrFile to fd 2...
	I1124 09:17:02.855974 1816265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:17:02.856278 1816265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:17:02.856646 1816265 mustload.go:66] Loading cluster: addons-048116
	I1124 09:17:02.857084 1816265 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:17:02.857205 1816265 addons.go:622] checking whether the cluster is paused
	I1124 09:17:02.857354 1816265 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:17:02.857395 1816265 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:17:02.857974 1816265 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:17:02.884426 1816265 ssh_runner.go:195] Run: systemctl --version
	I1124 09:17:02.884494 1816265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:17:02.910534 1816265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:17:03.017121 1816265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:17:03.017213 1816265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:17:03.050901 1816265 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:17:03.050921 1816265 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:17:03.050931 1816265 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:17:03.050935 1816265 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:17:03.050938 1816265 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:17:03.050942 1816265 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:17:03.050946 1816265 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:17:03.050948 1816265 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:17:03.050952 1816265 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:17:03.050958 1816265 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:17:03.050961 1816265 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:17:03.050964 1816265 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:17:03.050967 1816265 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:17:03.050970 1816265 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:17:03.050973 1816265 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:17:03.050978 1816265 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:17:03.050981 1816265 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:17:03.050985 1816265 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:17:03.050988 1816265 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:17:03.050991 1816265 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:17:03.050996 1816265 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:17:03.050998 1816265 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:17:03.051002 1816265 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:17:03.051005 1816265 cri.go:89] found id: ""
	I1124 09:17:03.051060 1816265 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:17:03.067402 1816265 out.go:203] 
	W1124 09:17:03.070456 1816265 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:17:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:17:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:17:03.070492 1816265 out.go:285] * 
	* 
	W1124 09:17:03.081762 1816265 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:17:03.084968 1816265 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.56s)

TestAddons/parallel/Ingress (143.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-048116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-048116 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-048116 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [640bfd5b-c14f-4adb-bf0e-7520b17e38f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [640bfd5b-c14f-4adb-bf0e-7520b17e38f5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003573042s
I1124 09:16:24.032830 1806704 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.535623829s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-048116 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-048116
helpers_test.go:243: (dbg) docker inspect addons-048116:

-- stdout --
	[
	    {
	        "Id": "668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf",
	        "Created": "2025-11-24T09:13:11.372154566Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1808139,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:13:11.433191639Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/hostname",
	        "HostsPath": "/var/lib/docker/containers/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/hosts",
	        "LogPath": "/var/lib/docker/containers/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf-json.log",
	        "Name": "/addons-048116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-048116:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-048116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf",
	                "LowerDir": "/var/lib/docker/overlay2/ac000a479b2c8fd0f13400b2cef36dc5b4bf7b41ec210f67b0d6463557561c9e-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac000a479b2c8fd0f13400b2cef36dc5b4bf7b41ec210f67b0d6463557561c9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac000a479b2c8fd0f13400b2cef36dc5b4bf7b41ec210f67b0d6463557561c9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac000a479b2c8fd0f13400b2cef36dc5b4bf7b41ec210f67b0d6463557561c9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-048116",
	                "Source": "/var/lib/docker/volumes/addons-048116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-048116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-048116",
	                "name.minikube.sigs.k8s.io": "addons-048116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69891f86b33a484b330746aca889be95c2af0e68c69ad3c121376813b41033ba",
	            "SandboxKey": "/var/run/docker/netns/69891f86b33a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34994"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34993"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-048116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:a5:da:58:ae:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9e31c098472caa9ff6321f1cfec21404bcf4e52c75d537222e4edcb53c2fa476",
	                    "EndpointID": "216a0a4df861018ebc3b72b58f5a282887a04ec64a6875951df53b7b9c69c636",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-048116",
	                        "668a21c39000"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-048116 -n addons-048116
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-048116 logs -n 25: (1.531742468s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-417875                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-417875 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ start   │ --download-only -p binary-mirror-891208 --alsologtostderr --binary-mirror http://127.0.0.1:41177 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-891208   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ -p binary-mirror-891208                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-891208   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ addons  │ enable dashboard -p addons-048116                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ addons  │ disable dashboard -p addons-048116                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ start   │ -p addons-048116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:15 UTC │
	│ addons  │ addons-048116 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:15 UTC │                     │
	│ addons  │ addons-048116 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:15 UTC │                     │
	│ addons  │ enable headlamp -p addons-048116 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:15 UTC │                     │
	│ addons  │ addons-048116 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:15 UTC │                     │
	│ addons  │ addons-048116 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:15 UTC │                     │
	│ ip      │ addons-048116 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │ 24 Nov 25 09:16 UTC │
	│ addons  │ addons-048116 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-048116 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-048116 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │                     │
	│ ssh     │ addons-048116 ssh cat /opt/local-path-provisioner/pvc-353fa42e-eb73-4deb-b40a-e91859447994_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │ 24 Nov 25 09:16 UTC │
	│ addons  │ addons-048116 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-048116 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │                     │
	│ ssh     │ addons-048116 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-048116 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-048116 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:16 UTC │                     │
	│ addons  │ addons-048116 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-048116                                                                                                                                                                                                                                                                                                                                                                                           │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:17 UTC │ 24 Nov 25 09:17 UTC │
	│ addons  │ addons-048116 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:17 UTC │                     │
	│ ip      │ addons-048116 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:18 UTC │ 24 Nov 25 09:18 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:12:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:12:46.657258 1807735 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:12:46.657913 1807735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:46.657955 1807735 out.go:374] Setting ErrFile to fd 2...
	I1124 09:12:46.657976 1807735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:46.658275 1807735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:12:46.658786 1807735 out.go:368] Setting JSON to false
	I1124 09:12:46.659614 1807735 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28517,"bootTime":1763947050,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:12:46.659708 1807735 start.go:143] virtualization:  
	I1124 09:12:46.663107 1807735 out.go:179] * [addons-048116] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:12:46.666953 1807735 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:12:46.667116 1807735 notify.go:221] Checking for updates...
	I1124 09:12:46.672616 1807735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:12:46.675463 1807735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:12:46.678413 1807735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:12:46.681215 1807735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:12:46.684188 1807735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:12:46.687280 1807735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:12:46.709960 1807735 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:12:46.710078 1807735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:46.769611 1807735 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 09:12:46.761201799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:46.769720 1807735 docker.go:319] overlay module found
	I1124 09:12:46.772770 1807735 out.go:179] * Using the docker driver based on user configuration
	I1124 09:12:46.775494 1807735 start.go:309] selected driver: docker
	I1124 09:12:46.775513 1807735 start.go:927] validating driver "docker" against <nil>
	I1124 09:12:46.775526 1807735 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:12:46.776243 1807735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:46.826256 1807735 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 09:12:46.817489903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:46.826420 1807735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:12:46.826646 1807735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:12:46.829482 1807735 out.go:179] * Using Docker driver with root privileges
	I1124 09:12:46.832235 1807735 cni.go:84] Creating CNI manager for ""
	I1124 09:12:46.832312 1807735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:12:46.832323 1807735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:12:46.832406 1807735 start.go:353] cluster config:
	{Name:addons-048116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:12:46.835414 1807735 out.go:179] * Starting "addons-048116" primary control-plane node in "addons-048116" cluster
	I1124 09:12:46.838163 1807735 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:12:46.841016 1807735 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:12:46.843903 1807735 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:12:46.843960 1807735 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1124 09:12:46.843973 1807735 cache.go:65] Caching tarball of preloaded images
	I1124 09:12:46.843973 1807735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:12:46.844057 1807735 preload.go:238] Found /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 09:12:46.844066 1807735 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:12:46.844405 1807735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/config.json ...
	I1124 09:12:46.844436 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/config.json: {Name:mkb1ee1dcbfbe36dfba719c019cb7a81772b6b82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:12:46.859564 1807735 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 09:12:46.859706 1807735 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 09:12:46.859727 1807735 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 09:12:46.859732 1807735 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 09:12:46.859752 1807735 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 09:12:46.859757 1807735 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1124 09:13:04.911425 1807735 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1124 09:13:04.911467 1807735 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:13:04.911508 1807735 start.go:360] acquireMachinesLock for addons-048116: {Name:mk1ec72fe76014a8e99e89e320726eb21bf6040a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:13:04.912216 1807735 start.go:364] duration metric: took 682.725µs to acquireMachinesLock for "addons-048116"
	I1124 09:13:04.912253 1807735 start.go:93] Provisioning new machine with config: &{Name:addons-048116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:13:04.912341 1807735 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:13:04.915515 1807735 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 09:13:04.915785 1807735 start.go:159] libmachine.API.Create for "addons-048116" (driver="docker")
	I1124 09:13:04.915826 1807735 client.go:173] LocalClient.Create starting
	I1124 09:13:04.915942 1807735 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem
	I1124 09:13:05.052000 1807735 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem
	I1124 09:13:05.463312 1807735 cli_runner.go:164] Run: docker network inspect addons-048116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:13:05.478970 1807735 cli_runner.go:211] docker network inspect addons-048116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:13:05.479049 1807735 network_create.go:284] running [docker network inspect addons-048116] to gather additional debugging logs...
	I1124 09:13:05.479072 1807735 cli_runner.go:164] Run: docker network inspect addons-048116
	W1124 09:13:05.496044 1807735 cli_runner.go:211] docker network inspect addons-048116 returned with exit code 1
	I1124 09:13:05.496075 1807735 network_create.go:287] error running [docker network inspect addons-048116]: docker network inspect addons-048116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-048116 not found
	I1124 09:13:05.496090 1807735 network_create.go:289] output of [docker network inspect addons-048116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-048116 not found
	
	** /stderr **
	I1124 09:13:05.496208 1807735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:13:05.512699 1807735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b8cfe0}
	I1124 09:13:05.512747 1807735 network_create.go:124] attempt to create docker network addons-048116 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 09:13:05.512804 1807735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-048116 addons-048116
	I1124 09:13:05.569985 1807735 network_create.go:108] docker network addons-048116 192.168.49.0/24 created
	I1124 09:13:05.570017 1807735 kic.go:121] calculated static IP "192.168.49.2" for the "addons-048116" container
	I1124 09:13:05.570102 1807735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:13:05.586515 1807735 cli_runner.go:164] Run: docker volume create addons-048116 --label name.minikube.sigs.k8s.io=addons-048116 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:13:05.604736 1807735 oci.go:103] Successfully created a docker volume addons-048116
	I1124 09:13:05.604830 1807735 cli_runner.go:164] Run: docker run --rm --name addons-048116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048116 --entrypoint /usr/bin/test -v addons-048116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:13:07.348387 1807735 cli_runner.go:217] Completed: docker run --rm --name addons-048116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048116 --entrypoint /usr/bin/test -v addons-048116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (1.743516994s)
	I1124 09:13:07.348417 1807735 oci.go:107] Successfully prepared a docker volume addons-048116
	I1124 09:13:07.348453 1807735 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:13:07.348466 1807735 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:13:07.348543 1807735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-048116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:13:11.305038 1807735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-048116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.956438596s)
	I1124 09:13:11.305070 1807735 kic.go:203] duration metric: took 3.956600453s to extract preloaded images to volume ...
	W1124 09:13:11.305220 1807735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 09:13:11.305336 1807735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:13:11.357247 1807735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-048116 --name addons-048116 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048116 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-048116 --network addons-048116 --ip 192.168.49.2 --volume addons-048116:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:13:11.643086 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Running}}
	I1124 09:13:11.663570 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:11.692823 1807735 cli_runner.go:164] Run: docker exec addons-048116 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:13:11.742106 1807735 oci.go:144] the created container "addons-048116" has a running status.
	I1124 09:13:11.742133 1807735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa...
	I1124 09:13:12.117303 1807735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:13:12.152641 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:12.174256 1807735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:13:12.174278 1807735 kic_runner.go:114] Args: [docker exec --privileged addons-048116 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:13:12.214577 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:12.232242 1807735 machine.go:94] provisionDockerMachine start ...
	I1124 09:13:12.232342 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:12.249860 1807735 main.go:143] libmachine: Using SSH client type: native
	I1124 09:13:12.250185 1807735 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34990 <nil> <nil>}
	I1124 09:13:12.250202 1807735 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:13:12.250846 1807735 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 09:13:15.408514 1807735 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-048116
	
	I1124 09:13:15.408539 1807735 ubuntu.go:182] provisioning hostname "addons-048116"
	I1124 09:13:15.408639 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:15.425654 1807735 main.go:143] libmachine: Using SSH client type: native
	I1124 09:13:15.425981 1807735 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34990 <nil> <nil>}
	I1124 09:13:15.425998 1807735 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-048116 && echo "addons-048116" | sudo tee /etc/hostname
	I1124 09:13:15.587415 1807735 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-048116
	
	I1124 09:13:15.587502 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:15.607147 1807735 main.go:143] libmachine: Using SSH client type: native
	I1124 09:13:15.607465 1807735 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34990 <nil> <nil>}
	I1124 09:13:15.607488 1807735 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-048116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-048116/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-048116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:13:15.761231 1807735 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:13:15.761259 1807735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:13:15.761278 1807735 ubuntu.go:190] setting up certificates
	I1124 09:13:15.761288 1807735 provision.go:84] configureAuth start
	I1124 09:13:15.761347 1807735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048116
	I1124 09:13:15.778926 1807735 provision.go:143] copyHostCerts
	I1124 09:13:15.779017 1807735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:13:15.779146 1807735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:13:15.779246 1807735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:13:15.779310 1807735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.addons-048116 san=[127.0.0.1 192.168.49.2 addons-048116 localhost minikube]
	I1124 09:13:16.037024 1807735 provision.go:177] copyRemoteCerts
	I1124 09:13:16.037095 1807735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:13:16.037164 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.055441 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.162039 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:13:16.181691 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 09:13:16.202466 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:13:16.219997 1807735 provision.go:87] duration metric: took 458.686581ms to configureAuth
	I1124 09:13:16.220025 1807735 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:13:16.220260 1807735 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:13:16.220400 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.237257 1807735 main.go:143] libmachine: Using SSH client type: native
	I1124 09:13:16.237568 1807735 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34990 <nil> <nil>}
	I1124 09:13:16.237589 1807735 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:13:16.541204 1807735 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:13:16.541226 1807735 machine.go:97] duration metric: took 4.308957959s to provisionDockerMachine
	I1124 09:13:16.541237 1807735 client.go:176] duration metric: took 11.625399564s to LocalClient.Create
	I1124 09:13:16.541251 1807735 start.go:167] duration metric: took 11.625466617s to libmachine.API.Create "addons-048116"
	I1124 09:13:16.541257 1807735 start.go:293] postStartSetup for "addons-048116" (driver="docker")
	I1124 09:13:16.541267 1807735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:13:16.541327 1807735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:13:16.541366 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.559362 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.665598 1807735 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:13:16.669819 1807735 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:13:16.669854 1807735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:13:16.669867 1807735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:13:16.669940 1807735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:13:16.669970 1807735 start.go:296] duration metric: took 128.707414ms for postStartSetup
	I1124 09:13:16.670279 1807735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048116
	I1124 09:13:16.689502 1807735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/config.json ...
	I1124 09:13:16.689806 1807735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:13:16.689855 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.707642 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.810587 1807735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
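The two Run lines above size-check the node's /var filesystem with `df` piped through `awk`. A standalone sketch of the same field extraction (run against `/` here, since `/var` may not be a separate mount on an arbitrary machine):

```shell
# Same awk field positions as the log's commands:
df -h / | awk 'NR==2{print $5}'   # fifth column of "df -h": use%, e.g. "23%"
df -BG / | awk 'NR==2{print $4}'  # fourth column of "df -BG": available, e.g. "40G"
```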
	I1124 09:13:16.816062 1807735 start.go:128] duration metric: took 11.90370535s to createHost
	I1124 09:13:16.816092 1807735 start.go:83] releasing machines lock for "addons-048116", held for 11.903857336s
	I1124 09:13:16.816168 1807735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048116
	I1124 09:13:16.832552 1807735 ssh_runner.go:195] Run: cat /version.json
	I1124 09:13:16.832614 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.832860 1807735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:13:16.832937 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.854729 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.860647 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.956841 1807735 ssh_runner.go:195] Run: systemctl --version
	I1124 09:13:17.046093 1807735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:13:17.081917 1807735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:13:17.086239 1807735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:13:17.086359 1807735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:13:17.114378 1807735 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
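The find/mv step above disables conflicting bridge and podman CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, leaving other configs (such as kindnet's) untouched. A minimal sketch against a temp directory standing in for `/etc/cni/net.d` (file names here are illustrative):

```shell
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist"
# Rename bridge/podman configs not already disabled, as in the log's find command:
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d" | sort
# 10-kindnet.conflist is left alone; the podman bridge config gains .mk_disabled
```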
	I1124 09:13:17.114402 1807735 start.go:496] detecting cgroup driver to use...
	I1124 09:13:17.114453 1807735 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:13:17.114523 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:13:17.131853 1807735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:13:17.145733 1807735 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:13:17.145867 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:13:17.164700 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:13:17.185584 1807735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:13:17.305493 1807735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:13:17.418593 1807735 docker.go:234] disabling docker service ...
	I1124 09:13:17.418666 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:13:17.439411 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:13:17.452380 1807735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:13:17.560287 1807735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:13:17.667977 1807735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:13:17.681708 1807735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
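The command above points crictl at CRI-O's socket by writing `/etc/crictl.yaml`. The same write, sketched against a temp file instead of `/etc` (so no sudo is needed):

```shell
f=$(mktemp)
# Identical payload to the log's printf | tee pipeline:
printf '%s' 'runtime-endpoint: unix:///var/run/crio/crio.sock
' | tee "$f" >/dev/null
grep -q 'unix:///var/run/crio/crio.sock' "$f" && echo configured
```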
	I1124 09:13:17.695990 1807735 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.34.2/kubeadm
	I1124 09:13:18.554379 1807735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:13:18.554473 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.563683 1807735 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:13:18.563776 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.572469 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.581422 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.590626 1807735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:13:18.598864 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.607570 1807735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.622166 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
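The run of sed commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup manager, and re-add `conmon_cgroup` after it. A condensed sketch of the same edits against a sample config (GNU sed assumed, as on the Debian host in the log):

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
EOF
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$cfg"
# "a" appends a line after every match, which is how conmon_cgroup lands
# directly below cgroup_manager:
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$cfg"
cat "$cfg"
```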
	I1124 09:13:18.631283 1807735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:13:18.639347 1807735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:13:18.646878 1807735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:13:18.752759 1807735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:13:18.980155 1807735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:13:18.980236 1807735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:13:18.984117 1807735 start.go:564] Will wait 60s for crictl version
	I1124 09:13:18.984187 1807735 ssh_runner.go:195] Run: which crictl
	I1124 09:13:18.987731 1807735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:13:19.016046 1807735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:13:19.016148 1807735 ssh_runner.go:195] Run: crio --version
	I1124 09:13:19.046656 1807735 ssh_runner.go:195] Run: crio --version
	I1124 09:13:19.082631 1807735 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:13:19.085445 1807735 cli_runner.go:164] Run: docker network inspect addons-048116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:13:19.101166 1807735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:13:19.104902 1807735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
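The /etc/hosts edit above is idempotent: it strips any existing `host.minikube.internal` line before appending the fresh one, so repeated starts never accumulate duplicates. The same pattern sketched against a temp copy (the real command edits /etc/hosts via sudo; IP and hostname taken from the log):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
# Drop the old entry (tab-anchored match, as in the log), then re-append once:
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
grep -c 'host.minikube.internal' "$hosts.new"   # prints 1, however often this runs
```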
	I1124 09:13:19.114519 1807735 kubeadm.go:884] updating cluster {Name:addons-048116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:13:19.114685 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.271232 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.430122 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.589039 1807735 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:13:19.589224 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.747362 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.896342 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:20.049221 1807735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:13:20.086133 1807735 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:13:20.086162 1807735 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:13:20.086227 1807735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:13:20.116350 1807735 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:13:20.116378 1807735 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:13:20.116386 1807735 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1124 09:13:20.116475 1807735 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-048116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
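The kubelet unit above contains an empty `ExecStart=` line followed by the real one. That is the standard systemd drop-in idiom: a bare `ExecStart=` clears any ExecStart inherited from the base unit, so the drop-in replaces the command line instead of appending a second one. Sketch of the resulting drop-in layout (paths from the log; flags abridged):

```shell
d=$(mktemp -d)
mkdir -p "$d/kubelet.service.d"
cat > "$d/kubelet.service.d/10-kubeadm.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --hostname-override=addons-048116
EOF
grep -c '^ExecStart=' "$d/kubelet.service.d/10-kubeadm.conf"   # prints 2
```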
	I1124 09:13:20.116559 1807735 ssh_runner.go:195] Run: crio config
	I1124 09:13:20.175368 1807735 cni.go:84] Creating CNI manager for ""
	I1124 09:13:20.175395 1807735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:13:20.175416 1807735 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:13:20.175440 1807735 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-048116 NodeName:addons-048116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:13:20.175564 1807735 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-048116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:13:20.175647 1807735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:13:20.183856 1807735 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:13:20.183930 1807735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:13:20.191957 1807735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 09:13:20.204891 1807735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:13:20.218512 1807735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1124 09:13:20.231553 1807735 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:13:20.235158 1807735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:13:20.245396 1807735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:13:20.360427 1807735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:13:20.375968 1807735 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116 for IP: 192.168.49.2
	I1124 09:13:20.376028 1807735 certs.go:195] generating shared ca certs ...
	I1124 09:13:20.376068 1807735 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.376254 1807735 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:13:20.506674 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt ...
	I1124 09:13:20.506709 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt: {Name:mke351c9a834a1abf5bef3fddc5b97fecdd23409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.506929 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key ...
	I1124 09:13:20.506945 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key: {Name:mk47a6e76f1c172854c494905626c98e44c63201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.507036 1807735 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:13:20.812069 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt ...
	I1124 09:13:20.812103 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt: {Name:mk084cb29d8c0c86e5bc36b0f5aa623f8ededce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.812282 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key ...
	I1124 09:13:20.812295 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key: {Name:mk9766ac9b677bcf41313bb9ea6584b7aa8dfeeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.812384 1807735 certs.go:257] generating profile certs ...
	I1124 09:13:20.812449 1807735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.key
	I1124 09:13:20.812469 1807735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt with IP's: []
	I1124 09:13:21.005411 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt ...
	I1124 09:13:21.005444 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: {Name:mk9a8e9f9d4da0bc14abe6aec19e982430311640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.006228 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.key ...
	I1124 09:13:21.006248 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.key: {Name:mkb2f72fc0b6fb30523834f1e8cf66e75b21667e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.006342 1807735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key.d220d5bc
	I1124 09:13:21.006365 1807735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt.d220d5bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 09:13:21.172161 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt.d220d5bc ...
	I1124 09:13:21.172193 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt.d220d5bc: {Name:mkfdc53537e073db3b47face9699ae62c55b36bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.172384 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key.d220d5bc ...
	I1124 09:13:21.172398 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key.d220d5bc: {Name:mka204191471870cbddbd84fb9debe3fd0f85aa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.172484 1807735 certs.go:382] copying /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt.d220d5bc -> /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt
	I1124 09:13:21.172561 1807735 certs.go:386] copying /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key.d220d5bc -> /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key
	I1124 09:13:21.172618 1807735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.key
	I1124 09:13:21.172642 1807735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.crt with IP's: []
	I1124 09:13:21.324478 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.crt ...
	I1124 09:13:21.324508 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.crt: {Name:mkaa2d322590f5a156ffadc7716cb512aa538e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.325342 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.key ...
	I1124 09:13:21.325360 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.key: {Name:mk5b79921fb62c126653935d453e15257e203c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.325561 1807735 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:13:21.325608 1807735 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:13:21.325640 1807735 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:13:21.325673 1807735 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:13:21.326270 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:13:21.344801 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:13:21.363228 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:13:21.381473 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:13:21.399502 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 09:13:21.417426 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:13:21.435623 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:13:21.457046 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:13:21.476782 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:13:21.496823 1807735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:13:21.509765 1807735 ssh_runner.go:195] Run: openssl version
	I1124 09:13:21.515997 1807735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:13:21.524655 1807735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:13:21.528314 1807735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:13:21.528387 1807735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:13:21.569441 1807735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
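The `b5213941.0` symlink name above is not arbitrary: it is OpenSSL's subject-name hash of the minikubeCA certificate plus a `.0` suffix, which is how hash-directory lookups in `/etc/ssl/certs` locate a CA. Sketch with a throwaway self-signed cert (the hash printed will differ from the log's, since the subject differs):

```shell
crt=$(mktemp); key=$(mktemp)
# Hypothetical one-day self-signed cert, just to have something to hash:
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 1 -subj "/CN=demo" 2>/dev/null
# Eight hex digits of subject hash + ".0" = the symlink name openssl expects:
echo "$(openssl x509 -hash -noout -in "$crt").0"
```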
	I1124 09:13:21.577665 1807735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:13:21.581082 1807735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:13:21.581220 1807735 kubeadm.go:401] StartCluster: {Name:addons-048116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:13:21.581296 1807735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:13:21.581360 1807735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:13:21.607014 1807735 cri.go:89] found id: ""
	I1124 09:13:21.607129 1807735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:13:21.614776 1807735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:13:21.622480 1807735 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:13:21.622549 1807735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:13:21.630665 1807735 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:13:21.630688 1807735 kubeadm.go:158] found existing configuration files:
	
	I1124 09:13:21.630772 1807735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:13:21.638523 1807735 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:13:21.638614 1807735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:13:21.646227 1807735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:13:21.653778 1807735 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:13:21.653858 1807735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:13:21.661430 1807735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:13:21.669564 1807735 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:13:21.669693 1807735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:13:21.677066 1807735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:13:21.684876 1807735 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:13:21.684959 1807735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:13:21.692118 1807735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:13:21.730585 1807735 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:13:21.730650 1807735 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:13:21.754590 1807735 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:13:21.754667 1807735 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:13:21.754707 1807735 kubeadm.go:319] OS: Linux
	I1124 09:13:21.754756 1807735 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:13:21.754808 1807735 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:13:21.754858 1807735 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:13:21.754909 1807735 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:13:21.754960 1807735 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:13:21.755012 1807735 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:13:21.755063 1807735 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:13:21.755115 1807735 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:13:21.755164 1807735 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:13:21.819162 1807735 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:13:21.819350 1807735 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:13:21.819491 1807735 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:13:21.826687 1807735 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:13:21.829841 1807735 out.go:252]   - Generating certificates and keys ...
	I1124 09:13:21.830025 1807735 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:13:21.830148 1807735 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:13:22.752176 1807735 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:13:22.900280 1807735 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:13:23.248351 1807735 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:13:23.777582 1807735 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:13:24.580331 1807735 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:13:24.580708 1807735 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-048116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 09:13:25.732714 1807735 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:13:25.732846 1807735 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-048116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 09:13:26.148455 1807735 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:13:26.290626 1807735 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:13:26.571220 1807735 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:13:26.571533 1807735 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:13:27.132213 1807735 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:13:27.850988 1807735 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:13:28.518201 1807735 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:13:28.630849 1807735 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:13:29.871491 1807735 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:13:29.872536 1807735 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:13:29.877297 1807735 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:13:29.880935 1807735 out.go:252]   - Booting up control plane ...
	I1124 09:13:29.881070 1807735 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:13:29.881209 1807735 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:13:29.881313 1807735 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:13:29.899397 1807735 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:13:29.899832 1807735 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:13:29.907961 1807735 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:13:29.908571 1807735 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:13:29.908803 1807735 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:13:30.083185 1807735 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:13:30.083309 1807735 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:13:32.083460 1807735 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000759502s
	I1124 09:13:32.087005 1807735 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:13:32.087102 1807735 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 09:13:32.087417 1807735 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:13:32.087510 1807735 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:13:35.098252 1807735 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.010848143s
	I1124 09:13:37.095694 1807735 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.008635135s
	I1124 09:13:39.094626 1807735 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.007400025s
	I1124 09:13:39.129569 1807735 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:13:39.149680 1807735 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:13:39.164884 1807735 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:13:39.165092 1807735 kubeadm.go:319] [mark-control-plane] Marking the node addons-048116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:13:39.177876 1807735 kubeadm.go:319] [bootstrap-token] Using token: z52gbi.58pkqb5o55l2h01z
	I1124 09:13:39.180965 1807735 out.go:252]   - Configuring RBAC rules ...
	I1124 09:13:39.181090 1807735 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:13:39.191599 1807735 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:13:39.200342 1807735 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:13:39.205152 1807735 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:13:39.209826 1807735 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:13:39.214168 1807735 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:13:39.502539 1807735 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:13:39.931279 1807735 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:13:40.502267 1807735 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:13:40.503600 1807735 kubeadm.go:319] 
	I1124 09:13:40.503678 1807735 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:13:40.503683 1807735 kubeadm.go:319] 
	I1124 09:13:40.503761 1807735 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:13:40.503765 1807735 kubeadm.go:319] 
	I1124 09:13:40.503790 1807735 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:13:40.503849 1807735 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:13:40.503900 1807735 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:13:40.503904 1807735 kubeadm.go:319] 
	I1124 09:13:40.503966 1807735 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:13:40.503975 1807735 kubeadm.go:319] 
	I1124 09:13:40.504023 1807735 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:13:40.504028 1807735 kubeadm.go:319] 
	I1124 09:13:40.504080 1807735 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:13:40.504155 1807735 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:13:40.504224 1807735 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:13:40.504227 1807735 kubeadm.go:319] 
	I1124 09:13:40.504312 1807735 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:13:40.504389 1807735 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:13:40.504393 1807735 kubeadm.go:319] 
	I1124 09:13:40.504477 1807735 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token z52gbi.58pkqb5o55l2h01z \
	I1124 09:13:40.504580 1807735 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5d16c010d48f473ef9a89b08092f440407a6e7096b121b775134bbe2ddebd722 \
	I1124 09:13:40.504600 1807735 kubeadm.go:319] 	--control-plane 
	I1124 09:13:40.504604 1807735 kubeadm.go:319] 
	I1124 09:13:40.504690 1807735 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:13:40.504695 1807735 kubeadm.go:319] 
	I1124 09:13:40.504777 1807735 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token z52gbi.58pkqb5o55l2h01z \
	I1124 09:13:40.504890 1807735 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5d16c010d48f473ef9a89b08092f440407a6e7096b121b775134bbe2ddebd722 
	I1124 09:13:40.509178 1807735 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 09:13:40.509425 1807735 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 09:13:40.509536 1807735 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:13:40.509564 1807735 cni.go:84] Creating CNI manager for ""
	I1124 09:13:40.509578 1807735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:13:40.512790 1807735 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:13:40.515817 1807735 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:13:40.520400 1807735 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:13:40.520425 1807735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:13:40.534148 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:13:40.825232 1807735 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:13:40.825383 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:40.825507 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-048116 minikube.k8s.io/updated_at=2025_11_24T09_13_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=addons-048116 minikube.k8s.io/primary=true
	I1124 09:13:41.007715 1807735 ops.go:34] apiserver oom_adj: -16
	I1124 09:13:41.007917 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:41.508578 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:42.015042 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:42.508190 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:43.009979 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:43.508772 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:44.008463 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:44.508779 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:45.008696 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:45.222173 1807735 kubeadm.go:1114] duration metric: took 4.396844862s to wait for elevateKubeSystemPrivileges
	I1124 09:13:45.222222 1807735 kubeadm.go:403] duration metric: took 23.641000701s to StartCluster
	I1124 09:13:45.222291 1807735 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:45.223296 1807735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:13:45.224231 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:45.224639 1807735 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:13:45.224753 1807735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:13:45.225052 1807735 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:13:45.225090 1807735 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 09:13:45.225604 1807735 addons.go:70] Setting yakd=true in profile "addons-048116"
	I1124 09:13:45.225626 1807735 addons.go:239] Setting addon yakd=true in "addons-048116"
	I1124 09:13:45.225661 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.226392 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.227436 1807735 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-048116"
	I1124 09:13:45.227492 1807735 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-048116"
	I1124 09:13:45.227526 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.228002 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.228609 1807735 addons.go:70] Setting cloud-spanner=true in profile "addons-048116"
	I1124 09:13:45.228643 1807735 addons.go:239] Setting addon cloud-spanner=true in "addons-048116"
	I1124 09:13:45.228684 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.229296 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.231715 1807735 out.go:179] * Verifying Kubernetes components...
	I1124 09:13:45.232105 1807735 addons.go:70] Setting metrics-server=true in profile "addons-048116"
	I1124 09:13:45.232189 1807735 addons.go:239] Setting addon metrics-server=true in "addons-048116"
	I1124 09:13:45.232254 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.234036 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.236525 1807735 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-048116"
	I1124 09:13:45.236571 1807735 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-048116"
	I1124 09:13:45.236613 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.241697 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.247289 1807735 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-048116"
	I1124 09:13:45.247690 1807735 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-048116"
	I1124 09:13:45.253854 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.254476 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.256250 1807735 addons.go:70] Setting registry=true in profile "addons-048116"
	I1124 09:13:45.256345 1807735 addons.go:239] Setting addon registry=true in "addons-048116"
	I1124 09:13:45.256399 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.257423 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.285953 1807735 addons.go:70] Setting default-storageclass=true in profile "addons-048116"
	I1124 09:13:45.286035 1807735 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-048116"
	I1124 09:13:45.286440 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.289318 1807735 addons.go:70] Setting registry-creds=true in profile "addons-048116"
	I1124 09:13:45.289354 1807735 addons.go:239] Setting addon registry-creds=true in "addons-048116"
	I1124 09:13:45.289396 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.289917 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.307772 1807735 addons.go:70] Setting gcp-auth=true in profile "addons-048116"
	I1124 09:13:45.307815 1807735 mustload.go:66] Loading cluster: addons-048116
	I1124 09:13:45.308049 1807735 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:13:45.308336 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.309198 1807735 addons.go:70] Setting storage-provisioner=true in profile "addons-048116"
	I1124 09:13:45.309236 1807735 addons.go:239] Setting addon storage-provisioner=true in "addons-048116"
	I1124 09:13:45.309284 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.309839 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.314495 1807735 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-048116"
	I1124 09:13:45.314545 1807735 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-048116"
	I1124 09:13:45.315780 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.330980 1807735 addons.go:70] Setting ingress=true in profile "addons-048116"
	I1124 09:13:45.331020 1807735 addons.go:239] Setting addon ingress=true in "addons-048116"
	I1124 09:13:45.331081 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.331582 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.336239 1807735 addons.go:70] Setting volcano=true in profile "addons-048116"
	I1124 09:13:45.336308 1807735 addons.go:239] Setting addon volcano=true in "addons-048116"
	I1124 09:13:45.336385 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.337265 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.348755 1807735 addons.go:70] Setting ingress-dns=true in profile "addons-048116"
	I1124 09:13:45.348788 1807735 addons.go:239] Setting addon ingress-dns=true in "addons-048116"
	I1124 09:13:45.348840 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.349407 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.391116 1807735 addons.go:70] Setting volumesnapshots=true in profile "addons-048116"
	I1124 09:13:45.391222 1807735 addons.go:239] Setting addon volumesnapshots=true in "addons-048116"
	I1124 09:13:45.391275 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.391875 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.392220 1807735 addons.go:70] Setting inspektor-gadget=true in profile "addons-048116"
	I1124 09:13:45.392246 1807735 addons.go:239] Setting addon inspektor-gadget=true in "addons-048116"
	I1124 09:13:45.392274 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.392702 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.426986 1807735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:13:45.531491 1807735 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:13:45.531726 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.575661 1807735 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 09:13:45.575774 1807735 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 09:13:45.577223 1807735 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:13:45.577243 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:13:45.577323 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.582412 1807735 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:13:45.582446 1807735 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:13:45.582519 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.584530 1807735 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 09:13:45.544480 1807735 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 09:13:45.546390 1807735 addons.go:239] Setting addon default-storageclass=true in "addons-048116"
	I1124 09:13:45.584950 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.587810 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.597427 1807735 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 09:13:45.597492 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 09:13:45.597573 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.547300 1807735 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-048116"
	I1124 09:13:45.597964 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.598419 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.633466 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 09:13:45.633491 1807735 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 09:13:45.633557 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.645839 1807735 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 09:13:45.653856 1807735 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 09:13:45.653934 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 09:13:45.654017 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.657289 1807735 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 09:13:45.657310 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 09:13:45.657373 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	W1124 09:13:45.578200 1807735 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 09:13:45.681232 1807735 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 09:13:45.724821 1807735 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 09:13:45.732169 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 09:13:45.735355 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 09:13:45.735464 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 09:13:45.735561 1807735 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 09:13:45.735591 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 09:13:45.735685 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.742584 1807735 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 09:13:45.743049 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 09:13:45.743071 1807735 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 09:13:45.743130 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.752521 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 09:13:45.754708 1807735 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 09:13:45.756276 1807735 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 09:13:45.761563 1807735 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 09:13:45.756476 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.756514 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.757607 1807735 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 09:13:45.763339 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 09:13:45.763421 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.769435 1807735 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 09:13:45.769462 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 09:13:45.769523 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.761385 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 09:13:45.793250 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 09:13:45.797236 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 09:13:45.801238 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 09:13:45.801591 1807735 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:13:45.801606 1807735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:13:45.801665 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.807716 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 09:13:45.810772 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 09:13:45.810797 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 09:13:45.810880 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.823551 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.786861 1807735 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 09:13:45.829228 1807735 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 09:13:45.831883 1807735 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 09:13:45.831904 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 09:13:45.831975 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.832119 1807735 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 09:13:45.832132 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 09:13:45.832179 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.863494 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.866588 1807735 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 09:13:45.873096 1807735 out.go:179]   - Using image docker.io/busybox:stable
	I1124 09:13:45.873553 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.877016 1807735 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 09:13:45.877043 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 09:13:45.877095 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.877456 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.974752 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.976695 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.992727 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.993638 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.999925 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:46.010722 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:46.014245 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	W1124 09:13:46.015180 1807735 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 09:13:46.015217 1807735 retry.go:31] will retry after 316.285365ms: ssh: handshake failed: EOF
	I1124 09:13:46.033476 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:46.044353 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	W1124 09:13:46.045625 1807735 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 09:13:46.045652 1807735 retry.go:31] will retry after 134.551187ms: ssh: handshake failed: EOF
	I1124 09:13:46.083202 1807735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:13:46.083479 1807735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1124 09:13:46.183581 1807735 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 09:13:46.183663 1807735 retry.go:31] will retry after 495.615285ms: ssh: handshake failed: EOF
	I1124 09:13:46.493675 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:13:46.533715 1807735 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 09:13:46.533797 1807735 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 09:13:46.563905 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 09:13:46.565548 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 09:13:46.603801 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 09:13:46.603878 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 09:13:46.610651 1807735 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 09:13:46.610669 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 09:13:46.618294 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 09:13:46.618314 1807735 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 09:13:46.669883 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 09:13:46.678386 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 09:13:46.679101 1807735 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:13:46.679115 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 09:13:46.688227 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 09:13:46.761539 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 09:13:46.761616 1807735 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 09:13:46.776551 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 09:13:46.792948 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 09:13:46.797733 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 09:13:46.797766 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 09:13:46.807514 1807735 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 09:13:46.807538 1807735 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 09:13:46.813300 1807735 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:13:46.813326 1807735 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:13:46.817096 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:13:46.818672 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 09:13:46.895369 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 09:13:46.895395 1807735 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 09:13:46.908479 1807735 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 09:13:46.908558 1807735 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 09:13:46.917634 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 09:13:46.917711 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 09:13:46.975971 1807735 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:13:46.976048 1807735 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:13:47.076692 1807735 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 09:13:47.076714 1807735 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 09:13:47.122656 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 09:13:47.122729 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 09:13:47.123687 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 09:13:47.123738 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 09:13:47.139331 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:13:47.270583 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 09:13:47.270656 1807735 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 09:13:47.290751 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 09:13:47.295699 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 09:13:47.295781 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 09:13:47.379355 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 09:13:47.441496 1807735 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.357968223s)
	I1124 09:13:47.441612 1807735 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.358008502s)
	I1124 09:13:47.441754 1807735 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1124 09:13:47.443126 1807735 node_ready.go:35] waiting up to 6m0s for node "addons-048116" to be "Ready" ...
	I1124 09:13:47.461801 1807735 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 09:13:47.461877 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 09:13:47.544329 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 09:13:47.544402 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 09:13:47.577686 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 09:13:47.577707 1807735 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 09:13:47.650485 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 09:13:47.825039 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 09:13:47.825132 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 09:13:47.947455 1807735 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-048116" context rescaled to 1 replicas
	I1124 09:13:48.073033 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 09:13:48.073117 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 09:13:48.333158 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 09:13:48.333232 1807735 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 09:13:48.603380 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1124 09:13:49.466347 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:49.684959 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.119344475s)
	I1124 09:13:49.685015 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.01505652s)
	I1124 09:13:49.685093 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.191337219s)
	I1124 09:13:49.684927 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.12094727s)
	I1124 09:13:49.893081 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.214661661s)
	I1124 09:13:51.490949 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.802685523s)
	I1124 09:13:51.491035 1807735 addons.go:495] Verifying addon ingress=true in "addons-048116"
	I1124 09:13:51.491364 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.714677823s)
	I1124 09:13:51.491420 1807735 addons.go:495] Verifying addon registry=true in "addons-048116"
	I1124 09:13:51.491546 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.698537262s)
	I1124 09:13:51.491594 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.674410494s)
	I1124 09:13:51.491838 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.673142818s)
	I1124 09:13:51.491911 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.352502678s)
	I1124 09:13:51.491919 1807735 addons.go:495] Verifying addon metrics-server=true in "addons-048116"
	I1124 09:13:51.491964 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.201147872s)
	I1124 09:13:51.492328 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.112900301s)
	I1124 09:13:51.492542 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.8419797s)
	W1124 09:13:51.492576 1807735 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 09:13:51.492597 1807735 retry.go:31] will retry after 186.113889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 09:13:51.496229 1807735 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-048116 service yakd-dashboard -n yakd-dashboard
	
	I1124 09:13:51.496383 1807735 out.go:179] * Verifying ingress addon...
	I1124 09:13:51.496457 1807735 out.go:179] * Verifying registry addon...
	I1124 09:13:51.500190 1807735 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 09:13:51.501211 1807735 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 09:13:51.512854 1807735 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 09:13:51.512876 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1124 09:13:51.515820 1807735 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1124 09:13:51.613490 1807735 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 09:13:51.613567 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:51.679189 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 09:13:51.882866 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.279362332s)
	I1124 09:13:51.882899 1807735 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-048116"
	I1124 09:13:51.887216 1807735 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 09:13:51.890920 1807735 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 09:13:51.925288 1807735 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 09:13:51.925313 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:13:51.946376 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:52.023848 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:52.024431 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:52.394392 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:52.504201 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:52.504653 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:52.894308 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:53.006879 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:53.006991 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:53.165702 1807735 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 09:13:53.165904 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:53.185054 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:53.302242 1807735 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 09:13:53.316030 1807735 addons.go:239] Setting addon gcp-auth=true in "addons-048116"
	I1124 09:13:53.316135 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:53.316631 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:53.333885 1807735 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 09:13:53.333942 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:53.350970 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:53.394973 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:53.455991 1807735 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 09:13:53.458806 1807735 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 09:13:53.461573 1807735 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 09:13:53.461606 1807735 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 09:13:53.475477 1807735 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 09:13:53.475551 1807735 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 09:13:53.489950 1807735 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 09:13:53.489977 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 09:13:53.505172 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:53.505718 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 09:13:53.506582 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:53.907950 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:13:53.949433 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:54.028468 1807735 addons.go:495] Verifying addon gcp-auth=true in "addons-048116"
	I1124 09:13:54.030697 1807735 out.go:179] * Verifying gcp-auth addon...
	I1124 09:13:54.034307 1807735 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 09:13:54.035133 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:54.035621 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:54.124610 1807735 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 09:13:54.124641 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:54.394193 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:54.505708 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:54.506079 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:54.537996 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:54.894493 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:55.005810 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:55.026420 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:55.038517 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:55.394647 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:55.503465 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:55.505273 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:55.538059 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:55.894148 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:56.005566 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:56.006431 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:56.037565 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:56.394569 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:13:56.447129 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:56.503492 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:56.504191 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:56.537780 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:56.894413 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:57.004417 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:57.005632 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:57.037646 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:57.394858 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:57.504280 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:57.504685 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:57.537409 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:57.903780 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:58.007012 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:58.007835 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:58.038029 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:58.394777 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:58.503565 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:58.504067 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:58.538213 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:58.894555 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:13:58.946615 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:59.004354 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:59.006086 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:59.037859 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:59.394060 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:59.504424 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:59.504553 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:59.537325 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:59.895936 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:00.019415 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:00.026629 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:00.073611 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:00.394184 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:00.503143 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:00.504416 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:00.538851 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:00.894326 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:00.946726 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:01.004708 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:01.004853 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:01.037894 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:01.393871 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:01.504232 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:01.504547 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:01.537342 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:01.894862 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:02.005894 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:02.007406 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:02.038103 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:02.394030 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:02.504652 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:02.504801 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:02.537876 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:02.894957 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:02.947109 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:03.009757 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:03.010159 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:03.037867 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:03.393996 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:03.503539 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:03.505844 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:03.537623 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:03.901476 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:04.007435 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:04.007579 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:04.038106 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:04.394354 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:04.505249 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:04.505586 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:04.537432 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:04.894915 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:05.007989 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:05.008278 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:05.037366 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:05.394276 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:05.447909 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:05.504259 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:05.505030 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:05.538111 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:05.901348 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:06.004376 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:06.010116 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:06.038243 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:06.394873 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:06.503642 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:06.504475 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:06.537738 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:06.894812 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:07.005219 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:07.005386 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:07.037532 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:07.393850 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:07.503562 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:07.504889 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:07.537359 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:07.895495 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:07.946289 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:08.006356 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:08.006426 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:08.037639 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:08.393645 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:08.503714 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:08.504959 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:08.537585 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:08.899251 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:09.010467 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:09.010696 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:09.037834 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:09.394148 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:09.504103 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:09.504843 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:09.537660 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:09.894873 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:09.946930 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:10.010378 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:10.010482 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:10.037547 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:10.394311 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:10.503244 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:10.504340 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:10.537205 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:10.894793 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:11.013704 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:11.014058 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:11.047311 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:11.395169 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:11.503503 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:11.504166 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:11.537950 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:11.894780 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:12.006077 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:12.008835 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:12.038520 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:12.395078 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:12.447902 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:12.504538 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:12.504902 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:12.537834 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:12.893871 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:13.006169 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:13.008010 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:13.037751 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:13.393904 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:13.503835 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:13.505686 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:13.537403 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:13.899695 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:14.007451 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:14.007674 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:14.041287 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:14.393931 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:14.504195 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:14.504331 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:14.537433 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:14.894901 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:14.946996 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:15.024446 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:15.024517 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:15.038803 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:15.393800 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:15.504507 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:15.504651 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:15.537825 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:15.894334 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:16.010238 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:16.011447 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:16.037585 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:16.394966 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:16.504191 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:16.504673 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:16.537632 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:16.894034 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:17.005743 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:17.006353 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:17.038058 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:17.394615 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:17.446768 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:17.503801 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:17.505217 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:17.538349 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:17.895014 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:18.006085 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:18.006620 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:18.037715 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:18.395054 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:18.504259 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:18.505452 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:18.537175 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:18.894755 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:19.004994 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:19.005326 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:19.037343 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:19.394489 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:19.504368 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:19.504795 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:19.537866 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:19.894138 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:19.946945 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:20.006999 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:20.007144 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:20.038289 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:20.394222 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:20.503902 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:20.504484 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:20.537371 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:20.894341 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:21.006612 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:21.006660 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:21.037524 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:21.394643 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:21.503566 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:21.504984 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:21.537908 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:21.894334 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:22.013888 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:22.013966 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:22.038210 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:22.394334 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:22.447021 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:22.505463 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:22.505525 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:22.538206 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:22.894471 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:23.004965 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:23.005292 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:23.038042 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:23.393942 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:23.504745 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:23.505241 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:23.538269 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:23.894398 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:24.007349 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:24.008348 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:24.038139 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:24.393880 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:24.447465 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:24.504663 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:24.504817 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:24.537836 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:24.894012 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:25.005512 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:25.007780 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:25.037656 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:25.394086 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:25.504060 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:25.504216 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:25.538179 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:25.894394 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:26.018144 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:26.021484 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:26.058840 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:26.446876 1807735 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 09:14:26.446897 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:26.453269 1807735 node_ready.go:49] node "addons-048116" is "Ready"
	I1124 09:14:26.453296 1807735 node_ready.go:38] duration metric: took 39.01001103s for node "addons-048116" to be "Ready" ...
	I1124 09:14:26.453310 1807735 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:14:26.453367 1807735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:14:26.490830 1807735 api_server.go:72] duration metric: took 41.266143681s to wait for apiserver process to appear ...
	I1124 09:14:26.490905 1807735 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:14:26.490940 1807735 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 09:14:26.508191 1807735 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 09:14:26.511071 1807735 api_server.go:141] control plane version: v1.34.2
	I1124 09:14:26.511096 1807735 api_server.go:131] duration metric: took 20.170383ms to wait for apiserver health ...
	I1124 09:14:26.511105 1807735 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:14:26.525804 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:26.526009 1807735 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 09:14:26.526060 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:26.532589 1807735 system_pods.go:59] 19 kube-system pods found
	I1124 09:14:26.532675 1807735 system_pods.go:61] "coredns-66bc5c9577-nbktx" [f8fb570b-bdc3-42aa-ab43-b610bb60e5a5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:14:26.532698 1807735 system_pods.go:61] "csi-hostpath-attacher-0" [a4c86076-80c5-4bba-b268-334b11c16027] Pending
	I1124 09:14:26.532719 1807735 system_pods.go:61] "csi-hostpath-resizer-0" [0f17422f-aecf-4107-a492-e15b5e2f8e34] Pending
	I1124 09:14:26.532772 1807735 system_pods.go:61] "csi-hostpathplugin-7cjv4" [7109430d-1382-4075-a4a8-3017ec67ceff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 09:14:26.532799 1807735 system_pods.go:61] "etcd-addons-048116" [59064330-be29-4606-bd4d-1ce20eecae05] Running
	I1124 09:14:26.532826 1807735 system_pods.go:61] "kindnet-qrx7h" [a48613f8-c8b7-469f-be2f-43cbbcd7c2bd] Running
	I1124 09:14:26.532861 1807735 system_pods.go:61] "kube-apiserver-addons-048116" [9a21db7c-83e8-461c-8b17-3b2be23f4c36] Running
	I1124 09:14:26.532884 1807735 system_pods.go:61] "kube-controller-manager-addons-048116" [b66c4cd3-86d2-430b-aee3-a2e03af4cf02] Running
	I1124 09:14:26.532907 1807735 system_pods.go:61] "kube-ingress-dns-minikube" [e796ce2b-f394-47eb-b88b-7f0c960ca793] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 09:14:26.532939 1807735 system_pods.go:61] "kube-proxy-959tb" [1cc9b107-5192-4845-823e-2c1c14be9277] Running
	I1124 09:14:26.532965 1807735 system_pods.go:61] "kube-scheduler-addons-048116" [55e5e89e-d9fa-49f1-a8a5-92ddd038e4ca] Running
	I1124 09:14:26.532987 1807735 system_pods.go:61] "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Pending
	I1124 09:14:26.533006 1807735 system_pods.go:61] "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Pending
	I1124 09:14:26.533026 1807735 system_pods.go:61] "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Pending
	I1124 09:14:26.533060 1807735 system_pods.go:61] "registry-creds-764b6fb674-9dvm5" [66991aa2-12ee-40af-aa3f-298f09e784f0] Pending
	I1124 09:14:26.533080 1807735 system_pods.go:61] "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Pending
	I1124 09:14:26.533164 1807735 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rsz7j" [6193c88c-9174-4625-aa38-f09a89419160] Pending
	I1124 09:14:26.533191 1807735 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zn7bf" [4863bd30-c420-492e-a4ad-3c572506e9fb] Pending
	I1124 09:14:26.533212 1807735 system_pods.go:61] "storage-provisioner" [cb118803-3bb3-4a2e-a061-9044a0402dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:14:26.533236 1807735 system_pods.go:74] duration metric: took 22.122211ms to wait for pod list to return data ...
	I1124 09:14:26.533270 1807735 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:14:26.550724 1807735 default_sa.go:45] found service account: "default"
	I1124 09:14:26.550956 1807735 default_sa.go:55] duration metric: took 17.663289ms for default service account to be created ...
	I1124 09:14:26.550996 1807735 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:14:26.550914 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:26.565906 1807735 system_pods.go:86] 19 kube-system pods found
	I1124 09:14:26.565999 1807735 system_pods.go:89] "coredns-66bc5c9577-nbktx" [f8fb570b-bdc3-42aa-ab43-b610bb60e5a5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:14:26.566021 1807735 system_pods.go:89] "csi-hostpath-attacher-0" [a4c86076-80c5-4bba-b268-334b11c16027] Pending
	I1124 09:14:26.566057 1807735 system_pods.go:89] "csi-hostpath-resizer-0" [0f17422f-aecf-4107-a492-e15b5e2f8e34] Pending
	I1124 09:14:26.566083 1807735 system_pods.go:89] "csi-hostpathplugin-7cjv4" [7109430d-1382-4075-a4a8-3017ec67ceff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 09:14:26.566119 1807735 system_pods.go:89] "etcd-addons-048116" [59064330-be29-4606-bd4d-1ce20eecae05] Running
	I1124 09:14:26.566147 1807735 system_pods.go:89] "kindnet-qrx7h" [a48613f8-c8b7-469f-be2f-43cbbcd7c2bd] Running
	I1124 09:14:26.566171 1807735 system_pods.go:89] "kube-apiserver-addons-048116" [9a21db7c-83e8-461c-8b17-3b2be23f4c36] Running
	I1124 09:14:26.566194 1807735 system_pods.go:89] "kube-controller-manager-addons-048116" [b66c4cd3-86d2-430b-aee3-a2e03af4cf02] Running
	I1124 09:14:26.566232 1807735 system_pods.go:89] "kube-ingress-dns-minikube" [e796ce2b-f394-47eb-b88b-7f0c960ca793] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 09:14:26.566256 1807735 system_pods.go:89] "kube-proxy-959tb" [1cc9b107-5192-4845-823e-2c1c14be9277] Running
	I1124 09:14:26.566283 1807735 system_pods.go:89] "kube-scheduler-addons-048116" [55e5e89e-d9fa-49f1-a8a5-92ddd038e4ca] Running
	I1124 09:14:26.566306 1807735 system_pods.go:89] "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Pending
	I1124 09:14:26.566338 1807735 system_pods.go:89] "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Pending
	I1124 09:14:26.566365 1807735 system_pods.go:89] "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Pending
	I1124 09:14:26.566389 1807735 system_pods.go:89] "registry-creds-764b6fb674-9dvm5" [66991aa2-12ee-40af-aa3f-298f09e784f0] Pending
	I1124 09:14:26.566412 1807735 system_pods.go:89] "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Pending
	I1124 09:14:26.566442 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rsz7j" [6193c88c-9174-4625-aa38-f09a89419160] Pending
	I1124 09:14:26.566469 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zn7bf" [4863bd30-c420-492e-a4ad-3c572506e9fb] Pending
	I1124 09:14:26.566493 1807735 system_pods.go:89] "storage-provisioner" [cb118803-3bb3-4a2e-a061-9044a0402dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:14:26.566527 1807735 retry.go:31] will retry after 266.890041ms: missing components: kube-dns
	I1124 09:14:26.867282 1807735 system_pods.go:86] 19 kube-system pods found
	I1124 09:14:26.867368 1807735 system_pods.go:89] "coredns-66bc5c9577-nbktx" [f8fb570b-bdc3-42aa-ab43-b610bb60e5a5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:14:26.867393 1807735 system_pods.go:89] "csi-hostpath-attacher-0" [a4c86076-80c5-4bba-b268-334b11c16027] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 09:14:26.867433 1807735 system_pods.go:89] "csi-hostpath-resizer-0" [0f17422f-aecf-4107-a492-e15b5e2f8e34] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 09:14:26.867461 1807735 system_pods.go:89] "csi-hostpathplugin-7cjv4" [7109430d-1382-4075-a4a8-3017ec67ceff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 09:14:26.867485 1807735 system_pods.go:89] "etcd-addons-048116" [59064330-be29-4606-bd4d-1ce20eecae05] Running
	I1124 09:14:26.867507 1807735 system_pods.go:89] "kindnet-qrx7h" [a48613f8-c8b7-469f-be2f-43cbbcd7c2bd] Running
	I1124 09:14:26.867540 1807735 system_pods.go:89] "kube-apiserver-addons-048116" [9a21db7c-83e8-461c-8b17-3b2be23f4c36] Running
	I1124 09:14:26.867564 1807735 system_pods.go:89] "kube-controller-manager-addons-048116" [b66c4cd3-86d2-430b-aee3-a2e03af4cf02] Running
	I1124 09:14:26.867587 1807735 system_pods.go:89] "kube-ingress-dns-minikube" [e796ce2b-f394-47eb-b88b-7f0c960ca793] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 09:14:26.867609 1807735 system_pods.go:89] "kube-proxy-959tb" [1cc9b107-5192-4845-823e-2c1c14be9277] Running
	I1124 09:14:26.867641 1807735 system_pods.go:89] "kube-scheduler-addons-048116" [55e5e89e-d9fa-49f1-a8a5-92ddd038e4ca] Running
	I1124 09:14:26.867669 1807735 system_pods.go:89] "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:14:26.867694 1807735 system_pods.go:89] "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 09:14:26.867721 1807735 system_pods.go:89] "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 09:14:26.867754 1807735 system_pods.go:89] "registry-creds-764b6fb674-9dvm5" [66991aa2-12ee-40af-aa3f-298f09e784f0] Pending
	I1124 09:14:26.867784 1807735 system_pods.go:89] "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 09:14:26.867809 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rsz7j" [6193c88c-9174-4625-aa38-f09a89419160] Pending
	I1124 09:14:26.867831 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zn7bf" [4863bd30-c420-492e-a4ad-3c572506e9fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 09:14:26.867866 1807735 system_pods.go:89] "storage-provisioner" [cb118803-3bb3-4a2e-a061-9044a0402dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:14:26.867901 1807735 retry.go:31] will retry after 326.243688ms: missing components: kube-dns
	I1124 09:14:26.909888 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:27.009659 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:27.010283 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:27.108377 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:27.208981 1807735 system_pods.go:86] 19 kube-system pods found
	I1124 09:14:27.209068 1807735 system_pods.go:89] "coredns-66bc5c9577-nbktx" [f8fb570b-bdc3-42aa-ab43-b610bb60e5a5] Running
	I1124 09:14:27.209094 1807735 system_pods.go:89] "csi-hostpath-attacher-0" [a4c86076-80c5-4bba-b268-334b11c16027] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 09:14:27.209143 1807735 system_pods.go:89] "csi-hostpath-resizer-0" [0f17422f-aecf-4107-a492-e15b5e2f8e34] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 09:14:27.209174 1807735 system_pods.go:89] "csi-hostpathplugin-7cjv4" [7109430d-1382-4075-a4a8-3017ec67ceff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 09:14:27.209199 1807735 system_pods.go:89] "etcd-addons-048116" [59064330-be29-4606-bd4d-1ce20eecae05] Running
	I1124 09:14:27.209219 1807735 system_pods.go:89] "kindnet-qrx7h" [a48613f8-c8b7-469f-be2f-43cbbcd7c2bd] Running
	I1124 09:14:27.209251 1807735 system_pods.go:89] "kube-apiserver-addons-048116" [9a21db7c-83e8-461c-8b17-3b2be23f4c36] Running
	I1124 09:14:27.209276 1807735 system_pods.go:89] "kube-controller-manager-addons-048116" [b66c4cd3-86d2-430b-aee3-a2e03af4cf02] Running
	I1124 09:14:27.209305 1807735 system_pods.go:89] "kube-ingress-dns-minikube" [e796ce2b-f394-47eb-b88b-7f0c960ca793] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 09:14:27.209330 1807735 system_pods.go:89] "kube-proxy-959tb" [1cc9b107-5192-4845-823e-2c1c14be9277] Running
	I1124 09:14:27.209365 1807735 system_pods.go:89] "kube-scheduler-addons-048116" [55e5e89e-d9fa-49f1-a8a5-92ddd038e4ca] Running
	I1124 09:14:27.209399 1807735 system_pods.go:89] "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:14:27.209426 1807735 system_pods.go:89] "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 09:14:27.209451 1807735 system_pods.go:89] "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 09:14:27.209490 1807735 system_pods.go:89] "registry-creds-764b6fb674-9dvm5" [66991aa2-12ee-40af-aa3f-298f09e784f0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 09:14:27.209512 1807735 system_pods.go:89] "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 09:14:27.209537 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rsz7j" [6193c88c-9174-4625-aa38-f09a89419160] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 09:14:27.209572 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zn7bf" [4863bd30-c420-492e-a4ad-3c572506e9fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 09:14:27.209601 1807735 system_pods.go:89] "storage-provisioner" [cb118803-3bb3-4a2e-a061-9044a0402dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:14:27.209626 1807735 system_pods.go:126] duration metric: took 658.603739ms to wait for k8s-apps to be running ...
	I1124 09:14:27.209650 1807735 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:14:27.209737 1807735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:14:27.235701 1807735 system_svc.go:56] duration metric: took 26.041787ms WaitForService to wait for kubelet
	I1124 09:14:27.235774 1807735 kubeadm.go:587] duration metric: took 42.011105987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:14:27.235808 1807735 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:14:27.239529 1807735 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 09:14:27.239604 1807735 node_conditions.go:123] node cpu capacity is 2
	I1124 09:14:27.239632 1807735 node_conditions.go:105] duration metric: took 3.802478ms to run NodePressure ...
	I1124 09:14:27.239661 1807735 start.go:242] waiting for startup goroutines ...
	I1124 09:14:27.395592 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:27.505211 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:27.505774 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:27.537964 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:27.895475 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:28.006592 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:28.006793 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:28.037465 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:28.395433 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:28.505685 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:28.507422 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:28.537806 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:28.895026 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:29.022403 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:29.022546 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:29.049607 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:29.394805 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:29.505156 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:29.505343 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:29.537466 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:29.895609 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:30.030992 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:30.042703 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:30.047261 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:30.395258 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:30.505371 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:30.505547 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:30.538011 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:30.900582 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:31.003674 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:31.006402 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:31.037452 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:31.395616 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:31.506363 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:31.506781 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:31.538100 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:31.895648 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:32.006859 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:32.007610 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:32.037879 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:32.394893 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:32.505995 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:32.506390 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:32.537801 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:32.895768 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:33.006510 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:33.006892 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:33.037861 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:33.395492 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:33.503764 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:33.505057 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:33.537995 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:33.894965 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:34.005726 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:34.005957 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:34.038032 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:34.394139 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:34.504593 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:34.505096 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:34.538876 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:34.896003 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:35.006845 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:35.007058 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:35.037822 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:35.394990 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:35.505188 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:35.505276 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:35.537962 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:35.895019 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:36.011398 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:36.019652 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:36.038418 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:36.395207 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:36.505259 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:36.505443 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:36.537679 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:36.895255 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:37.007789 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:37.008286 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:37.037732 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:37.394984 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:37.505343 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:37.506837 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:37.538329 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:37.894652 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:38.010451 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:38.023178 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:38.038575 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:38.395496 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:38.504353 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:38.505350 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:38.537170 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:38.899152 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:39.007126 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:39.007291 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:39.037338 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:39.395579 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:39.505254 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:39.507587 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:39.537914 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:39.894900 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:40.006947 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:40.007822 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:40.071110 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:40.394998 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:40.505240 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:40.505671 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:40.538136 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:40.894665 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:41.006781 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:41.006930 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:41.037890 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:41.394378 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:41.505136 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:41.505350 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:41.538231 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:41.895755 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:42.031940 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:42.032454 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:42.037212 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:42.394651 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:42.505505 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:42.505768 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:42.538071 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:42.894871 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:43.006338 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:43.007591 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:43.037908 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:43.394853 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:43.504178 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:43.504534 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:43.537122 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:43.894666 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:44.005897 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:44.006690 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:44.037724 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:44.394856 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:44.506089 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:44.506709 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:44.537766 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:44.894928 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:45.024287 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:45.048280 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:45.049358 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:45.395614 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:45.506218 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:45.506672 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:45.537885 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:45.897287 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:46.008434 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:46.014526 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:46.038133 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:46.395294 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:46.504993 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:46.507680 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:46.539122 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:46.895106 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:47.009009 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:47.009537 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:47.038474 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:47.396841 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:47.505744 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:47.506165 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:47.538571 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:47.900562 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:48.007471 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:48.008048 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:48.038098 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:48.395204 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:48.504498 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:48.505605 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:48.606585 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:48.908048 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:49.010194 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:49.010369 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:49.038424 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:49.404923 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:49.505738 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:49.505871 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:49.538586 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:49.896183 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:50.007252 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:50.007562 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:50.042125 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:50.398717 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:50.507456 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:50.508034 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:50.541710 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:50.897685 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:51.007201 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:51.007761 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:51.039449 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:51.395457 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:51.506233 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:51.506430 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:51.546469 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:51.895072 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:52.006224 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:52.007825 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:52.037865 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:52.394505 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:52.510596 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:52.513329 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:52.537904 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:52.906907 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:53.006522 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:53.006722 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:53.041008 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:53.395081 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:53.507656 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:53.508999 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:53.541452 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:53.894886 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:54.014384 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:54.014642 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:54.038100 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:54.394483 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:54.509723 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:54.510267 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:54.615214 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:54.895434 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:55.006121 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:55.006684 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:55.038181 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:55.395329 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:55.505082 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:55.505579 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:55.537166 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:55.894578 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:56.006649 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:56.006923 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:56.038762 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:56.394150 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:56.504773 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:56.505566 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:56.537938 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:56.894998 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:57.006842 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:57.007051 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:57.038303 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:57.395346 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:57.505561 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:57.505668 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:57.537481 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:57.894955 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:58.008366 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:58.008961 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:58.107621 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:58.395584 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:58.506245 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:58.506625 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:58.538630 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:58.906891 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:59.020751 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:59.021566 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:59.049208 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:59.395539 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:59.504752 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:59.504872 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:59.606011 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:59.894839 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:00.069292 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:00.069314 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:00.078297 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:00.400887 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:00.535122 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:00.535283 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:00.539652 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:00.895478 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:01.008404 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:01.008599 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:01.038500 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:01.395651 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:01.503986 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:01.504949 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:01.538681 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:01.894844 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:02.011531 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:02.012326 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:02.037409 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:02.396756 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:02.510172 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:02.510350 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:02.538069 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:02.895136 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:03.014425 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:03.014763 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:03.112576 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:03.398984 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:03.504897 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:03.505052 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:03.540409 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:03.895573 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:04.004969 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:04.007372 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:04.038343 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:04.398602 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:04.504710 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:04.504922 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:04.537849 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:04.895916 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:05.008685 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:05.008940 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:05.038296 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:05.395250 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:05.505191 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:05.506580 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:05.542401 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:05.895262 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:06.021859 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:06.021988 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:06.038137 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:06.395294 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:06.505599 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:06.506041 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:06.538061 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:06.895958 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:07.010146 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:07.010283 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:07.038223 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:07.395732 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:07.505385 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:07.505885 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:07.538280 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:07.895992 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:08.007428 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:08.008187 1807735 kapi.go:107] duration metric: took 1m16.50697382s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 09:15:08.037369 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:08.395352 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:08.504116 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:08.538494 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:08.897887 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:09.012966 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:09.038306 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:09.395713 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:09.504077 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:09.537822 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:09.894086 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:10.005596 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:10.038144 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:10.395321 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:10.504321 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:10.538362 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:10.894923 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:11.012189 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:11.038174 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:11.395379 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:11.503626 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:11.540039 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:11.896905 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:12.004923 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:12.041817 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:12.394526 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:12.505180 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:12.540029 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:12.895255 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:13.004512 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:13.038056 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:13.394965 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:13.503916 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:13.537625 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:13.894987 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:14.005290 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:14.036963 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:14.394745 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:14.503934 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:14.537549 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:14.894687 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:15.016468 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:15.041744 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:15.395611 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:15.503570 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:15.537470 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:15.894629 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:16.005568 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:16.037949 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:16.396673 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:16.505199 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:16.538382 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:16.895289 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:17.054861 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:17.060599 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:17.396093 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:17.503512 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:17.537466 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:17.895599 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:18.004989 1807735 kapi.go:107] duration metric: took 1m26.504796214s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 09:15:18.038902 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:18.394605 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:18.537455 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:18.895770 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:19.041430 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:19.395577 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:19.538012 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:19.894421 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:20.037724 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:20.394771 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:20.538111 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:20.909217 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:21.037656 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:21.395102 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:21.538236 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:21.895597 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:22.037701 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:22.394786 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:22.538214 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:22.894774 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:23.038061 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:23.395692 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:23.538251 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:23.900298 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:24.038878 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:24.394155 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:24.538267 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:24.895663 1807735 kapi.go:107] duration metric: took 1m33.004742553s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 09:15:25.047926 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:25.538061 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:26.038025 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:26.537555 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:27.038242 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:27.537622 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:28.038184 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:28.537538 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:29.037859 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:29.538379 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:30.045818 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:30.538086 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:31.037566 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:31.538052 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:32.045627 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:32.538365 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:33.037596 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:33.538090 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:34.037492 1807735 kapi.go:107] duration metric: took 1m40.003183234s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 09:15:34.041016 1807735 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-048116 cluster.
	I1124 09:15:34.043837 1807735 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 09:15:34.046697 1807735 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 09:15:34.049723 1807735 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1124 09:15:34.053468 1807735 addons.go:530] duration metric: took 1m48.828362513s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner amd-gpu-device-plugin ingress-dns inspektor-gadget registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1124 09:15:34.053543 1807735 start.go:247] waiting for cluster config update ...
	I1124 09:15:34.053566 1807735 start.go:256] writing updated cluster config ...
	I1124 09:15:34.053874 1807735 ssh_runner.go:195] Run: rm -f paused
	I1124 09:15:34.058284 1807735 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:15:34.138880 1807735 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nbktx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.143946 1807735 pod_ready.go:94] pod "coredns-66bc5c9577-nbktx" is "Ready"
	I1124 09:15:34.143980 1807735 pod_ready.go:86] duration metric: took 5.069334ms for pod "coredns-66bc5c9577-nbktx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.146404 1807735 pod_ready.go:83] waiting for pod "etcd-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.150947 1807735 pod_ready.go:94] pod "etcd-addons-048116" is "Ready"
	I1124 09:15:34.150974 1807735 pod_ready.go:86] duration metric: took 4.542361ms for pod "etcd-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.153240 1807735 pod_ready.go:83] waiting for pod "kube-apiserver-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.158240 1807735 pod_ready.go:94] pod "kube-apiserver-addons-048116" is "Ready"
	I1124 09:15:34.158267 1807735 pod_ready.go:86] duration metric: took 5.000016ms for pod "kube-apiserver-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.160916 1807735 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.462334 1807735 pod_ready.go:94] pod "kube-controller-manager-addons-048116" is "Ready"
	I1124 09:15:34.462364 1807735 pod_ready.go:86] duration metric: took 301.423595ms for pod "kube-controller-manager-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.663347 1807735 pod_ready.go:83] waiting for pod "kube-proxy-959tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:35.063073 1807735 pod_ready.go:94] pod "kube-proxy-959tb" is "Ready"
	I1124 09:15:35.063107 1807735 pod_ready.go:86] duration metric: took 399.681581ms for pod "kube-proxy-959tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:35.263095 1807735 pod_ready.go:83] waiting for pod "kube-scheduler-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:35.663065 1807735 pod_ready.go:94] pod "kube-scheduler-addons-048116" is "Ready"
	I1124 09:15:35.663095 1807735 pod_ready.go:86] duration metric: took 399.968583ms for pod "kube-scheduler-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:35.663112 1807735 pod_ready.go:40] duration metric: took 1.604794652s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:15:35.729393 1807735 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1124 09:15:35.734873 1807735 out.go:179] * Done! kubectl is now configured to use "addons-048116" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:17:40 addons-048116 crio[832]: time="2025-11-24T09:17:40.188491787Z" level=info msg="Removed pod sandbox: 5fd11881f4f970955ff12702400096cd51579db0b932e28baaf85eb5423eb9a1" id=ce65a18e-b799-49f9-92c2-1fa97c170b00 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.069644131Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-llbmk/POD" id=78608d50-bf28-4018-afce-1f7e70ddfeff name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.069722376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.080130996Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-llbmk Namespace:default ID:75993815c70e33ab36f654af7999bd6c5d733569e8b9a22e04ddb110e135bd47 UID:617896ed-6f3f-4164-b23a-9aac435897bd NetNS:/var/run/netns/4f6ec747-2921-4395-9a19-f50b5caae120 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40030b83f8}] Aliases:map[]}"
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.080353965Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-llbmk to CNI network \"kindnet\" (type=ptp)"
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.102769986Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-llbmk Namespace:default ID:75993815c70e33ab36f654af7999bd6c5d733569e8b9a22e04ddb110e135bd47 UID:617896ed-6f3f-4164-b23a-9aac435897bd NetNS:/var/run/netns/4f6ec747-2921-4395-9a19-f50b5caae120 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40030b83f8}] Aliases:map[]}"
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.103161769Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-llbmk for CNI network kindnet (type=ptp)"
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.108668229Z" level=info msg="Ran pod sandbox 75993815c70e33ab36f654af7999bd6c5d733569e8b9a22e04ddb110e135bd47 with infra container: default/hello-world-app-5d498dc89-llbmk/POD" id=78608d50-bf28-4018-afce-1f7e70ddfeff name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.1106867Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e2f28399-83e6-443d-b387-17f99a6bd943 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.110988817Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e2f28399-83e6-443d-b387-17f99a6bd943 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.111157032Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=e2f28399-83e6-443d-b387-17f99a6bd943 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.115261349Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0496a9ce-be44-4e75-837d-bb400a8e78d3 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:18:34 addons-048116 crio[832]: time="2025-11-24T09:18:34.12078476Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.017593831Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=0496a9ce-be44-4e75-837d-bb400a8e78d3 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.018863Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2f6f49cb-5831-49ee-8089-8b591de4aa96 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.022500512Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a69e1805-8956-43df-a22e-ef33019018db name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.030819398Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-llbmk/hello-world-app" id=e8390174-c848-426e-ba42-1b7ec2c2bd4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.031151677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.044269485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.044642741Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0b1454135cb881d29ebe6e1e7a9063147262b986c969058c1f779ec97d67c88d/merged/etc/passwd: no such file or directory"
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.044748351Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0b1454135cb881d29ebe6e1e7a9063147262b986c969058c1f779ec97d67c88d/merged/etc/group: no such file or directory"
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.045388474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.068248554Z" level=info msg="Created container 49ab02863651ee1d392e2bb90a5e0d30e03016fd5b12be981444f035d792efc2: default/hello-world-app-5d498dc89-llbmk/hello-world-app" id=e8390174-c848-426e-ba42-1b7ec2c2bd4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.071787677Z" level=info msg="Starting container: 49ab02863651ee1d392e2bb90a5e0d30e03016fd5b12be981444f035d792efc2" id=e59732a9-03b6-48d7-91ff-547f47c446fb name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:18:35 addons-048116 crio[832]: time="2025-11-24T09:18:35.077751176Z" level=info msg="Started container" PID=7074 containerID=49ab02863651ee1d392e2bb90a5e0d30e03016fd5b12be981444f035d792efc2 description=default/hello-world-app-5d498dc89-llbmk/hello-world-app id=e59732a9-03b6-48d7-91ff-547f47c446fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=75993815c70e33ab36f654af7999bd6c5d733569e8b9a22e04ddb110e135bd47
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	49ab02863651e       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   75993815c70e3       hello-world-app-5d498dc89-llbmk            default
	b54cdaaae831d       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   664ecad1508a6       nginx                                      default
	666d4a654c782       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   4b0716202b2c4       busybox                                    default
	f6bc8bc475597       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   e00c60b08a529       gcp-auth-78565c9fb4-h5h57                  gcp-auth
	35fb50b5b2713       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	2fd291f337e6c       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	4802d7a3ceb22       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	9f97e26a753dc       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	9d9632d112566       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	06d16a65ffa1d       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   97070190a552f       ingress-nginx-controller-6c8bf45fb-tzf4j   ingress-nginx
	faa3b26b74486       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   42b6d5697660b       gadget-8f498                               gadget
	233b0a07323f2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   21108f5875be0       registry-proxy-2xmpl                       kube-system
	bafca47aae23d       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    2                   f9cb3eb66aed9       ingress-nginx-admission-patch-2rsq7        ingress-nginx
	cc1f77bc48cc1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   283ac6a9ea00b       snapshot-controller-7d9fbc56b8-zn7bf       kube-system
	e4e10950f5aac       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   d80c8279def94       csi-hostpath-resizer-0                     kube-system
	1d60535273929       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   9acb2949b9c71       snapshot-controller-7d9fbc56b8-rsz7j       kube-system
	f3e8c080e1d84       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	2d27bdc7b18db       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   ec8a20bbd6a22       cloud-spanner-emulator-5bdddb765-8jmm9     default
	e00cdeaf5f748       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   f988ec2fb252c       registry-6b586f9694-d2pv7                  kube-system
	45be9a8bfc408       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   28e8853a14afb       ingress-nginx-admission-create-r76dg       ingress-nginx
	12b1fee06478e       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   c0faf4f32668c       csi-hostpath-attacher-0                    kube-system
	87c73e079bb84       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   dad3ed0b11a4f       metrics-server-85b7d694d7-4fg4f            kube-system
	9718a4629047a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   cd4d80ee8ffe2       kube-ingress-dns-minikube                  kube-system
	36318f85d4174       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   f69231bf61cb1       nvidia-device-plugin-daemonset-z6qjb       kube-system
	e1a7fe70441c7       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   e8d70ee6e2f33       yakd-dashboard-5ff678cb9-ltw2s             yakd-dashboard
	dc586c45d37fe       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   bf18c7acef752       local-path-provisioner-648f6765c9-c7876    local-path-storage
	2600acc92a3f2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   5a2013cd27c71       coredns-66bc5c9577-nbktx                   kube-system
	9c09d13919482       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   cb55648838854       storage-provisioner                        kube-system
	b4982ecbf9cf9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   cba656917424c       kindnet-qrx7h                              kube-system
	94b8a43bc5c3d       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   fce8406cb1cf2       kube-proxy-959tb                           kube-system
	540926b2e76ba       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   dffe4227a29c6       etcd-addons-048116                         kube-system
	49296fa79d5b5       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   14ed20b195f2a       kube-apiserver-addons-048116               kube-system
	239c1c8193a19       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   79382b436a25b       kube-scheduler-addons-048116               kube-system
	864930e920257       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   c28aefd67c75d       kube-controller-manager-addons-048116      kube-system
	
	
	==> coredns [2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c] <==
	[INFO] 10.244.0.18:50240 - 37930 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002189797s
	[INFO] 10.244.0.18:50240 - 23194 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00010186s
	[INFO] 10.244.0.18:50240 - 43217 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000071631s
	[INFO] 10.244.0.18:52377 - 12997 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150615s
	[INFO] 10.244.0.18:52377 - 12759 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000211006s
	[INFO] 10.244.0.18:39027 - 12835 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000118213s
	[INFO] 10.244.0.18:39027 - 12613 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000190395s
	[INFO] 10.244.0.18:54109 - 5303 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122267s
	[INFO] 10.244.0.18:54109 - 5114 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078713s
	[INFO] 10.244.0.18:38969 - 10071 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00143297s
	[INFO] 10.244.0.18:38969 - 9866 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001669634s
	[INFO] 10.244.0.18:40172 - 33299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141762s
	[INFO] 10.244.0.18:40172 - 33728 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127928s
	[INFO] 10.244.0.21:52327 - 7848 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000280266s
	[INFO] 10.244.0.21:47864 - 32568 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00018918s
	[INFO] 10.244.0.21:37460 - 59174 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139342s
	[INFO] 10.244.0.21:55836 - 52001 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118639s
	[INFO] 10.244.0.21:53604 - 941 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109499s
	[INFO] 10.244.0.21:35540 - 21907 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087566s
	[INFO] 10.244.0.21:39408 - 17577 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001951s
	[INFO] 10.244.0.21:46981 - 39459 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001711218s
	[INFO] 10.244.0.21:44654 - 46068 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000645612s
	[INFO] 10.244.0.21:51497 - 43640 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000867654s
	[INFO] 10.244.0.23:34965 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000227113s
	[INFO] 10.244.0.23:54400 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000170611s
	
	
	==> describe nodes <==
	Name:               addons-048116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-048116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=addons-048116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_13_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-048116
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-048116"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:13:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-048116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:18:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:16:54 +0000   Mon, 24 Nov 2025 09:13:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:16:54 +0000   Mon, 24 Nov 2025 09:13:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:16:54 +0000   Mon, 24 Nov 2025 09:13:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:16:54 +0000   Mon, 24 Nov 2025 09:14:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-048116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                ba29121a-9e25-4d48-89e2-ae8f0202b3f3
	  Boot ID:                    27a92f9c-55a4-4798-92be-317cdb891088
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     cloud-spanner-emulator-5bdddb765-8jmm9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  default                     hello-world-app-5d498dc89-llbmk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-8f498                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  gcp-auth                    gcp-auth-78565c9fb4-h5h57                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-tzf4j    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m44s
	  kube-system                 coredns-66bc5c9577-nbktx                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m50s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 csi-hostpathplugin-7cjv4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 etcd-addons-048116                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m56s
	  kube-system                 kindnet-qrx7h                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m51s
	  kube-system                 kube-apiserver-addons-048116                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-addons-048116       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-959tb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-addons-048116                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 metrics-server-85b7d694d7-4fg4f             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m45s
	  kube-system                 nvidia-device-plugin-daemonset-z6qjb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 registry-6b586f9694-d2pv7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 registry-creds-764b6fb674-9dvm5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 registry-proxy-2xmpl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-rsz7j        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-zn7bf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  local-path-storage          local-path-provisioner-648f6765c9-c7876     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ltw2s              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m49s                kube-proxy       
	  Warning  CgroupV1                 5m4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m3s (x8 over 5m4s)  kubelet          Node addons-048116 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s (x8 over 5m4s)  kubelet          Node addons-048116 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s (x8 over 5m4s)  kubelet          Node addons-048116 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m56s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m56s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m55s                kubelet          Node addons-048116 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m55s                kubelet          Node addons-048116 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m55s                kubelet          Node addons-048116 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m51s                node-controller  Node addons-048116 event: Registered Node addons-048116 in Controller
	  Normal   NodeReady                4m10s                kubelet          Node addons-048116 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527] <==
	{"level":"warn","ts":"2025-11-24T09:13:35.938239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:35.951936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:35.971584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:35.986199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.003632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.020545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.038779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.056120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.081099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.093286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.117551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.126579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.142696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.159009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.174627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.208038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.224278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.238372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.304690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:52.092836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:52.107279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:14:14.017440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:14:14.039547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:14:14.061266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:14:14.077535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42706","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f6bc8bc4755979c1458661e14329600d6a8a859bf82d7f805c666a0337460b9f] <==
	2025/11/24 09:15:33 GCP Auth Webhook started!
	2025/11/24 09:15:36 Ready to marshal response ...
	2025/11/24 09:15:36 Ready to write response ...
	2025/11/24 09:15:36 Ready to marshal response ...
	2025/11/24 09:15:36 Ready to write response ...
	2025/11/24 09:15:36 Ready to marshal response ...
	2025/11/24 09:15:36 Ready to write response ...
	2025/11/24 09:15:57 Ready to marshal response ...
	2025/11/24 09:15:57 Ready to write response ...
	2025/11/24 09:16:02 Ready to marshal response ...
	2025/11/24 09:16:02 Ready to write response ...
	2025/11/24 09:16:02 Ready to marshal response ...
	2025/11/24 09:16:02 Ready to write response ...
	2025/11/24 09:16:12 Ready to marshal response ...
	2025/11/24 09:16:12 Ready to write response ...
	2025/11/24 09:16:14 Ready to marshal response ...
	2025/11/24 09:16:14 Ready to write response ...
	2025/11/24 09:16:30 Ready to marshal response ...
	2025/11/24 09:16:30 Ready to write response ...
	2025/11/24 09:16:48 Ready to marshal response ...
	2025/11/24 09:16:48 Ready to write response ...
	2025/11/24 09:18:33 Ready to marshal response ...
	2025/11/24 09:18:33 Ready to write response ...
	
	
	==> kernel <==
	 09:18:36 up  8:01,  0 user,  load average: 0.57, 2.02, 2.89
	Linux addons-048116 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8] <==
	I1124 09:16:35.718448       1 main.go:301] handling current node
	I1124 09:16:45.717827       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:16:45.717977       1 main.go:301] handling current node
	I1124 09:16:55.718465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:16:55.718504       1 main.go:301] handling current node
	I1124 09:17:05.721250       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:17:05.721293       1 main.go:301] handling current node
	I1124 09:17:15.722988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:17:15.723021       1 main.go:301] handling current node
	I1124 09:17:25.718411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:17:25.718441       1 main.go:301] handling current node
	I1124 09:17:35.722905       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:17:35.722941       1 main.go:301] handling current node
	I1124 09:17:45.721200       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:17:45.721315       1 main.go:301] handling current node
	I1124 09:17:55.720450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:17:55.720557       1 main.go:301] handling current node
	I1124 09:18:05.723383       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:18:05.723418       1 main.go:301] handling current node
	I1124 09:18:15.721196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:18:15.721231       1 main.go:301] handling current node
	I1124 09:18:25.718343       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:18:25.718378       1 main.go:301] handling current node
	I1124 09:18:35.718130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:18:35.718162       1 main.go:301] handling current node
	
	
	==> kube-apiserver [49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d] <==
	W1124 09:14:26.130514       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.187.255:443: connect: connection refused
	E1124 09:14:26.130557       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.187.255:443: connect: connection refused" logger="UnhandledError"
	W1124 09:14:50.921211       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 09:14:50.921257       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1124 09:14:50.921271       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1124 09:14:50.925165       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 09:14:50.925239       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1124 09:14:50.925257       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1124 09:14:51.483669       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.66.75:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.66.75:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.66.75:443: connect: connection refused" logger="UnhandledError"
	W1124 09:14:51.483762       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 09:14:51.483826       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 09:14:51.485493       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.66.75:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.66.75:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.66.75:443: connect: connection refused" logger="UnhandledError"
	I1124 09:14:51.614913       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 09:15:46.277264       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34230: use of closed network connection
	I1124 09:16:14.634349       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 09:16:15.016852       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.254.44"}
	I1124 09:16:40.525824       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1124 09:16:56.269250       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1124 09:18:33.989320       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.95.48"}
	
	
	==> kube-controller-manager [864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84] <==
	I1124 09:13:44.041459       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:13:44.041586       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:13:44.041633       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 09:13:44.041660       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 09:13:44.041712       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 09:13:44.041746       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 09:13:44.041800       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:13:44.042210       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:13:44.042356       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:13:44.042453       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 09:13:44.046665       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:13:44.054314       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-048116" podCIDRs=["10.244.0.0/24"]
	I1124 09:13:44.055311       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 09:13:44.061266       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1124 09:13:50.016160       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1124 09:14:14.009813       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 09:14:14.009999       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 09:14:14.010045       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 09:14:14.030718       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 09:14:14.035319       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 09:14:14.111064       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:14:14.135765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:14:29.057325       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1124 09:14:44.116548       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 09:14:44.142848       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed] <==
	I1124 09:13:45.468133       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:13:45.647109       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:13:45.798538       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:13:45.818267       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 09:13:45.818372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:13:46.130471       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:13:46.130528       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:13:46.142107       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:13:46.142425       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:13:46.142439       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:13:46.148529       1 config.go:200] "Starting service config controller"
	I1124 09:13:46.148549       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:13:46.148565       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:13:46.148569       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:13:46.148606       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:13:46.148610       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:13:46.155955       1 config.go:309] "Starting node config controller"
	I1124 09:13:46.155992       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:13:46.156001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:13:46.248713       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:13:46.248769       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:13:46.248986       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c] <==
	E1124 09:13:37.107104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:13:37.107213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:13:37.107317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:13:37.107416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:13:37.109317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:13:37.109545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:13:37.109673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:13:37.109768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:13:37.109881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:13:37.109938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:13:37.921405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:13:37.964542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:13:37.974671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:13:38.021871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:13:38.054434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 09:13:38.090418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:13:38.118183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:13:38.148811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:13:38.172605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:13:38.249399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:13:38.345141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:13:38.359698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:13:38.389079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:13:38.403071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1124 09:13:41.074059       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:16:49 addons-048116 kubelet[1273]: W1124 09:16:49.135130    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/crio-0fc42b1ddb4e08bb6bb7a816d9248f7d418e6ce1d05d8f42974a8fb6c542951e WatchSource:0}: Error finding container 0fc42b1ddb4e08bb6bb7a816d9248f7d418e6ce1d05d8f42974a8fb6c542951e: Status 404 returned error can't find the container with id 0fc42b1ddb4e08bb6bb7a816d9248f7d418e6ce1d05d8f42974a8fb6c542951e
	Nov 24 09:16:50 addons-048116 kubelet[1273]: I1124 09:16:50.221775    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.873508535 podStartE2EDuration="2.221755384s" podCreationTimestamp="2025-11-24 09:16:48 +0000 UTC" firstStartedPulling="2025-11-24 09:16:49.13742611 +0000 UTC m=+189.391725903" lastFinishedPulling="2025-11-24 09:16:49.485672959 +0000 UTC m=+189.739972752" observedRunningTime="2025-11-24 09:16:50.22136726 +0000 UTC m=+190.475667061" watchObservedRunningTime="2025-11-24 09:16:50.221755384 +0000 UTC m=+190.476055185"
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.215845    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3-gcp-creds\") pod \"b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3\" (UID: \"b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3\") "
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.216002    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4dc0cfa4-c916-11f0-8993-d6dcc2a5f159\") pod \"b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3\" (UID: \"b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3\") "
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.216070    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmmtj\" (UniqueName: \"kubernetes.io/projected/b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3-kube-api-access-zmmtj\") pod \"b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3\" (UID: \"b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3\") "
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.216442    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3" (UID: "b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.220646    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^4dc0cfa4-c916-11f0-8993-d6dcc2a5f159" (OuterVolumeSpecName: "task-pv-storage") pod "b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3" (UID: "b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3"). InnerVolumeSpecName "pvc-4d1202f2-0ad3-4d4b-bc19-d46624d1f0b1". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.221658    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3-kube-api-access-zmmtj" (OuterVolumeSpecName: "kube-api-access-zmmtj") pod "b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3" (UID: "b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3"). InnerVolumeSpecName "kube-api-access-zmmtj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.231297    1273 scope.go:117] "RemoveContainer" containerID="175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005"
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.244548    1273 scope.go:117] "RemoveContainer" containerID="175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005"
	Nov 24 09:16:56 addons-048116 kubelet[1273]: E1124 09:16:56.251244    1273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005\": container with ID starting with 175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005 not found: ID does not exist" containerID="175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005"
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.251313    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005"} err="failed to get container status \"175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005\": rpc error: code = NotFound desc = could not find container \"175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005\": container with ID starting with 175f9997934aefed98ebb769a886e814777d25f7a9083244095875113e8a9005 not found: ID does not exist"
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.316573    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zmmtj\" (UniqueName: \"kubernetes.io/projected/b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3-kube-api-access-zmmtj\") on node \"addons-048116\" DevicePath \"\""
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.316772    1273 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3-gcp-creds\") on node \"addons-048116\" DevicePath \"\""
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.316860    1273 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-4d1202f2-0ad3-4d4b-bc19-d46624d1f0b1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4dc0cfa4-c916-11f0-8993-d6dcc2a5f159\") on node \"addons-048116\" "
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.322217    1273 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-4d1202f2-0ad3-4d4b-bc19-d46624d1f0b1" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^4dc0cfa4-c916-11f0-8993-d6dcc2a5f159") on node "addons-048116"
	Nov 24 09:16:56 addons-048116 kubelet[1273]: I1124 09:16:56.417753    1273 reconciler_common.go:299] "Volume detached for volume \"pvc-4d1202f2-0ad3-4d4b-bc19-d46624d1f0b1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4dc0cfa4-c916-11f0-8993-d6dcc2a5f159\") on node \"addons-048116\" DevicePath \"\""
	Nov 24 09:16:57 addons-048116 kubelet[1273]: I1124 09:16:57.891072    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3" path="/var/lib/kubelet/pods/b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3/volumes"
	Nov 24 09:17:28 addons-048116 kubelet[1273]: I1124 09:17:28.887930    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-z6qjb" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 09:17:28 addons-048116 kubelet[1273]: I1124 09:17:28.888038    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-d2pv7" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 09:17:56 addons-048116 kubelet[1273]: I1124 09:17:56.888500    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2xmpl" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 09:18:33 addons-048116 kubelet[1273]: I1124 09:18:33.834629    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95b9z\" (UniqueName: \"kubernetes.io/projected/617896ed-6f3f-4164-b23a-9aac435897bd-kube-api-access-95b9z\") pod \"hello-world-app-5d498dc89-llbmk\" (UID: \"617896ed-6f3f-4164-b23a-9aac435897bd\") " pod="default/hello-world-app-5d498dc89-llbmk"
	Nov 24 09:18:33 addons-048116 kubelet[1273]: I1124 09:18:33.835199    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/617896ed-6f3f-4164-b23a-9aac435897bd-gcp-creds\") pod \"hello-world-app-5d498dc89-llbmk\" (UID: \"617896ed-6f3f-4164-b23a-9aac435897bd\") " pod="default/hello-world-app-5d498dc89-llbmk"
	Nov 24 09:18:33 addons-048116 kubelet[1273]: I1124 09:18:33.888278    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-d2pv7" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 09:18:34 addons-048116 kubelet[1273]: W1124 09:18:34.105874    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/crio-75993815c70e33ab36f654af7999bd6c5d733569e8b9a22e04ddb110e135bd47 WatchSource:0}: Error finding container 75993815c70e33ab36f654af7999bd6c5d733569e8b9a22e04ddb110e135bd47: Status 404 returned error can't find the container with id 75993815c70e33ab36f654af7999bd6c5d733569e8b9a22e04ddb110e135bd47
	
	
	==> storage-provisioner [9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6] <==
	W1124 09:18:12.092247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:14.095200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:14.099700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:16.102846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:16.107710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:18.111678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:18.118237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:20.122553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:20.127152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:22.129820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:22.134624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:24.138523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:24.145639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:26.149327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:26.154289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:28.158030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:28.162643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:30.167969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:30.178059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:32.180654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:32.185236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:34.189179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:34.194107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:36.196696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:18:36.201335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-048116 -n addons-048116
helpers_test.go:269: (dbg) Run:  kubectl --context addons-048116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-r76dg ingress-nginx-admission-patch-2rsq7 registry-creds-764b6fb674-9dvm5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-048116 describe pod ingress-nginx-admission-create-r76dg ingress-nginx-admission-patch-2rsq7 registry-creds-764b6fb674-9dvm5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-048116 describe pod ingress-nginx-admission-create-r76dg ingress-nginx-admission-patch-2rsq7 registry-creds-764b6fb674-9dvm5: exit status 1 (151.942631ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r76dg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2rsq7" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-9dvm5" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-048116 describe pod ingress-nginx-admission-create-r76dg ingress-nginx-admission-patch-2rsq7 registry-creds-764b6fb674-9dvm5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (309.125634ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:18:37.532015 1817338 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:18:37.533010 1817338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:18:37.533063 1817338 out.go:374] Setting ErrFile to fd 2...
	I1124 09:18:37.533097 1817338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:18:37.533475 1817338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:18:37.533836 1817338 mustload.go:66] Loading cluster: addons-048116
	I1124 09:18:37.534313 1817338 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:18:37.534357 1817338 addons.go:622] checking whether the cluster is paused
	I1124 09:18:37.534508 1817338 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:18:37.534541 1817338 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:18:37.535104 1817338 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:18:37.558065 1817338 ssh_runner.go:195] Run: systemctl --version
	I1124 09:18:37.558120 1817338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:18:37.580374 1817338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:18:37.692601 1817338 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:18:37.692710 1817338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:18:37.733886 1817338 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:18:37.733924 1817338 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:18:37.733930 1817338 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:18:37.733934 1817338 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:18:37.733945 1817338 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:18:37.733949 1817338 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:18:37.733952 1817338 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:18:37.733956 1817338 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:18:37.733959 1817338 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:18:37.733965 1817338 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:18:37.733969 1817338 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:18:37.733972 1817338 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:18:37.733976 1817338 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:18:37.733979 1817338 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:18:37.733982 1817338 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:18:37.733987 1817338 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:18:37.734000 1817338 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:18:37.734004 1817338 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:18:37.734007 1817338 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:18:37.734011 1817338 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:18:37.734016 1817338 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:18:37.734029 1817338 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:18:37.734032 1817338 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:18:37.734035 1817338 cri.go:89] found id: ""
	I1124 09:18:37.734105 1817338 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:18:37.752132 1817338 out.go:203] 
	W1124 09:18:37.755312 1817338 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:18:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:18:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:18:37.755339 1817338 out.go:285] * 
	* 
	W1124 09:18:37.765708 1817338 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:18:37.768778 1817338 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable ingress --alsologtostderr -v=1: exit status 11 (338.377124ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:18:37.847437 1817392 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:18:37.848220 1817392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:18:37.848237 1817392 out.go:374] Setting ErrFile to fd 2...
	I1124 09:18:37.848243 1817392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:18:37.848632 1817392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:18:37.848983 1817392 mustload.go:66] Loading cluster: addons-048116
	I1124 09:18:37.849474 1817392 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:18:37.849495 1817392 addons.go:622] checking whether the cluster is paused
	I1124 09:18:37.849658 1817392 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:18:37.849691 1817392 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:18:37.850339 1817392 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:18:37.874787 1817392 ssh_runner.go:195] Run: systemctl --version
	I1124 09:18:37.874845 1817392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:18:37.905614 1817392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:18:38.017339 1817392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:18:38.017461 1817392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:18:38.070867 1817392 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:18:38.070936 1817392 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:18:38.070942 1817392 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:18:38.070946 1817392 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:18:38.070950 1817392 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:18:38.070953 1817392 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:18:38.070957 1817392 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:18:38.070961 1817392 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:18:38.070964 1817392 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:18:38.070986 1817392 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:18:38.070995 1817392 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:18:38.070998 1817392 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:18:38.071001 1817392 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:18:38.071004 1817392 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:18:38.071007 1817392 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:18:38.071015 1817392 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:18:38.071075 1817392 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:18:38.071080 1817392 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:18:38.071084 1817392 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:18:38.071087 1817392 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:18:38.071093 1817392 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:18:38.071099 1817392 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:18:38.071102 1817392 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:18:38.071105 1817392 cri.go:89] found id: ""
	I1124 09:18:38.071208 1817392 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:18:38.089462 1817392 out.go:203] 
	W1124 09:18:38.092551 1817392 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:18:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:18:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:18:38.092589 1817392 out.go:285] * 
	* 
	W1124 09:18:38.103339 1817392 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:18:38.106509 1817392 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.81s)

TestAddons/parallel/InspektorGadget (5.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget


=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8f498" [2ebdfc51-565f-4860-b797-f5fa70db76a6] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003757234s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (279.944752ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:17:02.296285 1816213 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:17:02.297162 1816213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:17:02.297230 1816213 out.go:374] Setting ErrFile to fd 2...
	I1124 09:17:02.297253 1816213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:17:02.297902 1816213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:17:02.298370 1816213 mustload.go:66] Loading cluster: addons-048116
	I1124 09:17:02.300180 1816213 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:17:02.300238 1816213 addons.go:622] checking whether the cluster is paused
	I1124 09:17:02.300403 1816213 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:17:02.300445 1816213 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:17:02.301061 1816213 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:17:02.321155 1816213 ssh_runner.go:195] Run: systemctl --version
	I1124 09:17:02.321222 1816213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:17:02.341035 1816213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:17:02.452177 1816213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:17:02.452273 1816213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:17:02.485204 1816213 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:17:02.485226 1816213 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:17:02.485230 1816213 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:17:02.485234 1816213 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:17:02.485238 1816213 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:17:02.485241 1816213 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:17:02.485245 1816213 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:17:02.485248 1816213 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:17:02.485251 1816213 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:17:02.485256 1816213 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:17:02.485260 1816213 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:17:02.485263 1816213 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:17:02.485266 1816213 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:17:02.485269 1816213 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:17:02.485272 1816213 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:17:02.485278 1816213 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:17:02.485285 1816213 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:17:02.485290 1816213 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:17:02.485293 1816213 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:17:02.485296 1816213 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:17:02.485301 1816213 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:17:02.485306 1816213 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:17:02.485310 1816213 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:17:02.485314 1816213 cri.go:89] found id: ""
	I1124 09:17:02.485371 1816213 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:17:02.501458 1816213 out.go:203] 
	W1124 09:17:02.504413 1816213 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:17:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:17:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:17:02.504442 1816213 out.go:285] * 
	* 
	W1124 09:17:02.515309 1816213 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:17:02.518220 1816213 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.28s)
TestAddons/parallel/MetricsServer (5.43s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.561319ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003488726s
addons_test.go:463: (dbg) Run:  kubectl --context addons-048116 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (329.80165ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 09:16:14.036004 1815071 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:16:14.044561 1815071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:14.044636 1815071 out.go:374] Setting ErrFile to fd 2...
	I1124 09:16:14.044660 1815071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:14.045146 1815071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:16:14.046195 1815071 mustload.go:66] Loading cluster: addons-048116
	I1124 09:16:14.046691 1815071 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:14.046716 1815071 addons.go:622] checking whether the cluster is paused
	I1124 09:16:14.046887 1815071 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:14.046903 1815071 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:16:14.047453 1815071 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:16:14.069460 1815071 ssh_runner.go:195] Run: systemctl --version
	I1124 09:16:14.069525 1815071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:16:14.088382 1815071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:16:14.199769 1815071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:16:14.199870 1815071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:16:14.238347 1815071 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:16:14.238370 1815071 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:16:14.238374 1815071 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:16:14.238378 1815071 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:16:14.238382 1815071 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:16:14.238386 1815071 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:16:14.238390 1815071 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:16:14.238393 1815071 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:16:14.238396 1815071 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:16:14.238401 1815071 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:16:14.238405 1815071 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:16:14.238408 1815071 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:16:14.238411 1815071 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:16:14.238414 1815071 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:16:14.238417 1815071 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:16:14.238423 1815071 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:16:14.238432 1815071 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:16:14.238437 1815071 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:16:14.238440 1815071 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:16:14.238443 1815071 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:16:14.238451 1815071 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:16:14.238459 1815071 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:16:14.238462 1815071 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:16:14.238465 1815071 cri.go:89] found id: ""
	I1124 09:16:14.238516 1815071 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:16:14.259057 1815071 out.go:203] 
	W1124 09:16:14.262413 1815071 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:16:14.262445 1815071 out.go:285] * 
	* 
	W1124 09:16:14.283043 1815071 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:16:14.286426 1815071 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.43s)
TestAddons/parallel/CSI (44.26s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1124 09:16:12.980282 1806704 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 09:16:12.985654 1806704 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 09:16:12.985689 1806704 kapi.go:107] duration metric: took 5.41918ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.431718ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-048116 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-048116 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e6b16f47-17df-4a26-8a55-e165586f722c] Pending
helpers_test.go:352: "task-pv-pod" [e6b16f47-17df-4a26-8a55-e165586f722c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [e6b16f47-17df-4a26-8a55-e165586f722c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004039352s
addons_test.go:572: (dbg) Run:  kubectl --context addons-048116 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-048116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-048116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-048116 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-048116 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-048116 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-048116 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3] Pending
helpers_test.go:352: "task-pv-pod-restore" [b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b9bfdd3e-9f92-46ec-bbab-efa2599cb1f3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004030236s
addons_test.go:614: (dbg) Run:  kubectl --context addons-048116 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-048116 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-048116 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (287.3401ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 09:16:56.738353 1816106 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:16:56.739990 1816106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:56.740080 1816106 out.go:374] Setting ErrFile to fd 2...
	I1124 09:16:56.740117 1816106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:56.740543 1816106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:16:56.741046 1816106 mustload.go:66] Loading cluster: addons-048116
	I1124 09:16:56.741598 1816106 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:56.741658 1816106 addons.go:622] checking whether the cluster is paused
	I1124 09:16:56.741832 1816106 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:56.741890 1816106 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:16:56.742654 1816106 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:16:56.759997 1816106 ssh_runner.go:195] Run: systemctl --version
	I1124 09:16:56.760055 1816106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:16:56.787641 1816106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:16:56.892237 1816106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:16:56.892327 1816106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:16:56.926216 1816106 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:16:56.926255 1816106 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:16:56.926261 1816106 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:16:56.926284 1816106 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:16:56.926293 1816106 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:16:56.926297 1816106 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:16:56.926301 1816106 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:16:56.926304 1816106 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:16:56.926307 1816106 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:16:56.926314 1816106 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:16:56.926321 1816106 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:16:56.926325 1816106 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:16:56.926328 1816106 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:16:56.926331 1816106 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:16:56.926334 1816106 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:16:56.926339 1816106 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:16:56.926365 1816106 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:16:56.926370 1816106 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:16:56.926374 1816106 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:16:56.926376 1816106 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:16:56.926382 1816106 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:16:56.926388 1816106 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:16:56.926391 1816106 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:16:56.926394 1816106 cri.go:89] found id: ""
	I1124 09:16:56.926461 1816106 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:16:56.942070 1816106 out.go:203] 
	W1124 09:16:56.945012 1816106 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:16:56.945096 1816106 out.go:285] * 
	* 
	W1124 09:16:56.956812 1816106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:16:56.959896 1816106 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (275.59314ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 09:16:57.019245 1816148 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:16:57.020047 1816148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:57.020067 1816148 out.go:374] Setting ErrFile to fd 2...
	I1124 09:16:57.020074 1816148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:57.020502 1816148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:16:57.020916 1816148 mustload.go:66] Loading cluster: addons-048116
	I1124 09:16:57.021432 1816148 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:57.021457 1816148 addons.go:622] checking whether the cluster is paused
	I1124 09:16:57.021644 1816148 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:57.021665 1816148 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:16:57.022242 1816148 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:16:57.040547 1816148 ssh_runner.go:195] Run: systemctl --version
	I1124 09:16:57.040685 1816148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:16:57.059107 1816148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:16:57.167813 1816148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:16:57.167969 1816148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:16:57.201043 1816148 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:16:57.201074 1816148 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:16:57.201080 1816148 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:16:57.201085 1816148 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:16:57.201088 1816148 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:16:57.201094 1816148 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:16:57.201097 1816148 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:16:57.201134 1816148 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:16:57.201138 1816148 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:16:57.201145 1816148 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:16:57.201148 1816148 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:16:57.201151 1816148 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:16:57.201155 1816148 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:16:57.201158 1816148 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:16:57.201161 1816148 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:16:57.201170 1816148 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:16:57.201177 1816148 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:16:57.201182 1816148 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:16:57.201185 1816148 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:16:57.201189 1816148 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:16:57.201197 1816148 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:16:57.201201 1816148 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:16:57.201204 1816148 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:16:57.201215 1816148 cri.go:89] found id: ""
	I1124 09:16:57.201269 1816148 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:16:57.218293 1816148 out.go:203] 
	W1124 09:16:57.221637 1816148 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:16:57.221667 1816148 out.go:285] * 
	* 
	W1124 09:16:57.232246 1816148 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:16:57.235169 1816148 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.26s)

TestAddons/parallel/Headlamp (3.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-048116 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-048116 --alsologtostderr -v=1: exit status 11 (282.367852ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:15:46.746326 1813888 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:15:46.747255 1813888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:15:46.747276 1813888 out.go:374] Setting ErrFile to fd 2...
	I1124 09:15:46.747283 1813888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:15:46.747604 1813888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:15:46.747946 1813888 mustload.go:66] Loading cluster: addons-048116
	I1124 09:15:46.748369 1813888 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:15:46.748395 1813888 addons.go:622] checking whether the cluster is paused
	I1124 09:15:46.748543 1813888 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:15:46.748562 1813888 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:15:46.749302 1813888 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:15:46.768049 1813888 ssh_runner.go:195] Run: systemctl --version
	I1124 09:15:46.768122 1813888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:15:46.790553 1813888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:15:46.900023 1813888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:15:46.900148 1813888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:15:46.928615 1813888 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:15:46.928638 1813888 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:15:46.928649 1813888 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:15:46.928653 1813888 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:15:46.928657 1813888 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:15:46.928661 1813888 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:15:46.928664 1813888 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:15:46.928668 1813888 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:15:46.928671 1813888 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:15:46.928676 1813888 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:15:46.928679 1813888 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:15:46.928683 1813888 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:15:46.928719 1813888 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:15:46.928729 1813888 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:15:46.928733 1813888 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:15:46.928739 1813888 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:15:46.928742 1813888 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:15:46.928746 1813888 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:15:46.928749 1813888 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:15:46.928752 1813888 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:15:46.928768 1813888 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:15:46.928776 1813888 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:15:46.928779 1813888 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:15:46.928782 1813888 cri.go:89] found id: ""
	I1124 09:15:46.928835 1813888 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:15:46.944813 1813888 out.go:203] 
	W1124 09:15:46.947707 1813888 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:15:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:15:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:15:46.947735 1813888 out.go:285] * 
	* 
	W1124 09:15:46.957905 1813888 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:15:46.960873 1813888 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-048116 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-048116
helpers_test.go:243: (dbg) docker inspect addons-048116:

-- stdout --
	[
	    {
	        "Id": "668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf",
	        "Created": "2025-11-24T09:13:11.372154566Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1808139,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:13:11.433191639Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/hostname",
	        "HostsPath": "/var/lib/docker/containers/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/hosts",
	        "LogPath": "/var/lib/docker/containers/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf-json.log",
	        "Name": "/addons-048116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-048116:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-048116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf",
	                "LowerDir": "/var/lib/docker/overlay2/ac000a479b2c8fd0f13400b2cef36dc5b4bf7b41ec210f67b0d6463557561c9e-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac000a479b2c8fd0f13400b2cef36dc5b4bf7b41ec210f67b0d6463557561c9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac000a479b2c8fd0f13400b2cef36dc5b4bf7b41ec210f67b0d6463557561c9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac000a479b2c8fd0f13400b2cef36dc5b4bf7b41ec210f67b0d6463557561c9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-048116",
	                "Source": "/var/lib/docker/volumes/addons-048116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-048116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-048116",
	                "name.minikube.sigs.k8s.io": "addons-048116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69891f86b33a484b330746aca889be95c2af0e68c69ad3c121376813b41033ba",
	            "SandboxKey": "/var/run/docker/netns/69891f86b33a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34994"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34993"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-048116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:a5:da:58:ae:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9e31c098472caa9ff6321f1cfec21404bcf4e52c75d537222e4edcb53c2fa476",
	                    "EndpointID": "216a0a4df861018ebc3b72b58f5a282887a04ec64a6875951df53b7b9c69c636",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-048116",
	                        "668a21c39000"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-048116 -n addons-048116
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-048116 logs -n 25: (1.473909577s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-698929 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-698929   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-698929                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-698929   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-432573 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-432573   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-432573                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-432573   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-533969 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-533969   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-533969                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-533969   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-698929                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-698929   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-432573                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-432573   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-533969                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-533969   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ start   │ --download-only -p download-docker-417875 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-417875 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ -p download-docker-417875                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-417875 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ start   │ --download-only -p binary-mirror-891208 --alsologtostderr --binary-mirror http://127.0.0.1:41177 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-891208   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ -p binary-mirror-891208                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-891208   │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ addons  │ enable dashboard -p addons-048116                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ addons  │ disable dashboard -p addons-048116                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ start   │ -p addons-048116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:15 UTC │
	│ addons  │ addons-048116 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:15 UTC │                     │
	│ addons  │ addons-048116 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:15 UTC │                     │
	│ addons  │ enable headlamp -p addons-048116 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-048116          │ jenkins │ v1.37.0 │ 24 Nov 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:12:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:12:46.657258 1807735 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:12:46.657913 1807735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:46.657955 1807735 out.go:374] Setting ErrFile to fd 2...
	I1124 09:12:46.657976 1807735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:46.658275 1807735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:12:46.658786 1807735 out.go:368] Setting JSON to false
	I1124 09:12:46.659614 1807735 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28517,"bootTime":1763947050,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:12:46.659708 1807735 start.go:143] virtualization:  
	I1124 09:12:46.663107 1807735 out.go:179] * [addons-048116] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:12:46.666953 1807735 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:12:46.667116 1807735 notify.go:221] Checking for updates...
	I1124 09:12:46.672616 1807735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:12:46.675463 1807735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:12:46.678413 1807735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:12:46.681215 1807735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:12:46.684188 1807735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:12:46.687280 1807735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:12:46.709960 1807735 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:12:46.710078 1807735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:46.769611 1807735 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 09:12:46.761201799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:46.769720 1807735 docker.go:319] overlay module found
	I1124 09:12:46.772770 1807735 out.go:179] * Using the docker driver based on user configuration
	I1124 09:12:46.775494 1807735 start.go:309] selected driver: docker
	I1124 09:12:46.775513 1807735 start.go:927] validating driver "docker" against <nil>
	I1124 09:12:46.775526 1807735 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:12:46.776243 1807735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:46.826256 1807735 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 09:12:46.817489903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:46.826420 1807735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:12:46.826646 1807735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:12:46.829482 1807735 out.go:179] * Using Docker driver with root privileges
	I1124 09:12:46.832235 1807735 cni.go:84] Creating CNI manager for ""
	I1124 09:12:46.832312 1807735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:12:46.832323 1807735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:12:46.832406 1807735 start.go:353] cluster config:
	{Name:addons-048116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:12:46.835414 1807735 out.go:179] * Starting "addons-048116" primary control-plane node in "addons-048116" cluster
	I1124 09:12:46.838163 1807735 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:12:46.841016 1807735 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:12:46.843903 1807735 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:12:46.843960 1807735 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1124 09:12:46.843973 1807735 cache.go:65] Caching tarball of preloaded images
	I1124 09:12:46.843973 1807735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:12:46.844057 1807735 preload.go:238] Found /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 09:12:46.844066 1807735 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:12:46.844405 1807735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/config.json ...
	I1124 09:12:46.844436 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/config.json: {Name:mkb1ee1dcbfbe36dfba719c019cb7a81772b6b82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:12:46.859564 1807735 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 09:12:46.859706 1807735 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 09:12:46.859727 1807735 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 09:12:46.859732 1807735 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 09:12:46.859752 1807735 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 09:12:46.859757 1807735 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1124 09:13:04.911425 1807735 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1124 09:13:04.911467 1807735 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:13:04.911508 1807735 start.go:360] acquireMachinesLock for addons-048116: {Name:mk1ec72fe76014a8e99e89e320726eb21bf6040a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:13:04.912216 1807735 start.go:364] duration metric: took 682.725µs to acquireMachinesLock for "addons-048116"
	I1124 09:13:04.912253 1807735 start.go:93] Provisioning new machine with config: &{Name:addons-048116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:13:04.912341 1807735 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:13:04.915515 1807735 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 09:13:04.915785 1807735 start.go:159] libmachine.API.Create for "addons-048116" (driver="docker")
	I1124 09:13:04.915826 1807735 client.go:173] LocalClient.Create starting
	I1124 09:13:04.915942 1807735 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem
	I1124 09:13:05.052000 1807735 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem
	I1124 09:13:05.463312 1807735 cli_runner.go:164] Run: docker network inspect addons-048116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:13:05.478970 1807735 cli_runner.go:211] docker network inspect addons-048116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:13:05.479049 1807735 network_create.go:284] running [docker network inspect addons-048116] to gather additional debugging logs...
	I1124 09:13:05.479072 1807735 cli_runner.go:164] Run: docker network inspect addons-048116
	W1124 09:13:05.496044 1807735 cli_runner.go:211] docker network inspect addons-048116 returned with exit code 1
	I1124 09:13:05.496075 1807735 network_create.go:287] error running [docker network inspect addons-048116]: docker network inspect addons-048116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-048116 not found
	I1124 09:13:05.496090 1807735 network_create.go:289] output of [docker network inspect addons-048116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-048116 not found
	
	** /stderr **
	I1124 09:13:05.496208 1807735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:13:05.512699 1807735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b8cfe0}
	I1124 09:13:05.512747 1807735 network_create.go:124] attempt to create docker network addons-048116 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 09:13:05.512804 1807735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-048116 addons-048116
	I1124 09:13:05.569985 1807735 network_create.go:108] docker network addons-048116 192.168.49.0/24 created
	I1124 09:13:05.570017 1807735 kic.go:121] calculated static IP "192.168.49.2" for the "addons-048116" container
	I1124 09:13:05.570102 1807735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:13:05.586515 1807735 cli_runner.go:164] Run: docker volume create addons-048116 --label name.minikube.sigs.k8s.io=addons-048116 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:13:05.604736 1807735 oci.go:103] Successfully created a docker volume addons-048116
	I1124 09:13:05.604830 1807735 cli_runner.go:164] Run: docker run --rm --name addons-048116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048116 --entrypoint /usr/bin/test -v addons-048116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:13:07.348387 1807735 cli_runner.go:217] Completed: docker run --rm --name addons-048116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048116 --entrypoint /usr/bin/test -v addons-048116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (1.743516994s)
	I1124 09:13:07.348417 1807735 oci.go:107] Successfully prepared a docker volume addons-048116
	I1124 09:13:07.348453 1807735 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:13:07.348466 1807735 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:13:07.348543 1807735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-048116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:13:11.305038 1807735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-048116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.956438596s)
	I1124 09:13:11.305070 1807735 kic.go:203] duration metric: took 3.956600453s to extract preloaded images to volume ...
	W1124 09:13:11.305220 1807735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 09:13:11.305336 1807735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:13:11.357247 1807735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-048116 --name addons-048116 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-048116 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-048116 --network addons-048116 --ip 192.168.49.2 --volume addons-048116:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:13:11.643086 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Running}}
	I1124 09:13:11.663570 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:11.692823 1807735 cli_runner.go:164] Run: docker exec addons-048116 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:13:11.742106 1807735 oci.go:144] the created container "addons-048116" has a running status.
	I1124 09:13:11.742133 1807735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa...
	I1124 09:13:12.117303 1807735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:13:12.152641 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:12.174256 1807735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:13:12.174278 1807735 kic_runner.go:114] Args: [docker exec --privileged addons-048116 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:13:12.214577 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:12.232242 1807735 machine.go:94] provisionDockerMachine start ...
	I1124 09:13:12.232342 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:12.249860 1807735 main.go:143] libmachine: Using SSH client type: native
	I1124 09:13:12.250185 1807735 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34990 <nil> <nil>}
	I1124 09:13:12.250202 1807735 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:13:12.250846 1807735 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 09:13:15.408514 1807735 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-048116
	
	I1124 09:13:15.408539 1807735 ubuntu.go:182] provisioning hostname "addons-048116"
	I1124 09:13:15.408639 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:15.425654 1807735 main.go:143] libmachine: Using SSH client type: native
	I1124 09:13:15.425981 1807735 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34990 <nil> <nil>}
	I1124 09:13:15.425998 1807735 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-048116 && echo "addons-048116" | sudo tee /etc/hostname
	I1124 09:13:15.587415 1807735 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-048116
	
	I1124 09:13:15.587502 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:15.607147 1807735 main.go:143] libmachine: Using SSH client type: native
	I1124 09:13:15.607465 1807735 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34990 <nil> <nil>}
	I1124 09:13:15.607488 1807735 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-048116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-048116/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-048116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:13:15.761231 1807735 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:13:15.761259 1807735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:13:15.761278 1807735 ubuntu.go:190] setting up certificates
	I1124 09:13:15.761288 1807735 provision.go:84] configureAuth start
	I1124 09:13:15.761347 1807735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048116
	I1124 09:13:15.778926 1807735 provision.go:143] copyHostCerts
	I1124 09:13:15.779017 1807735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:13:15.779146 1807735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:13:15.779246 1807735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:13:15.779310 1807735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.addons-048116 san=[127.0.0.1 192.168.49.2 addons-048116 localhost minikube]
	I1124 09:13:16.037024 1807735 provision.go:177] copyRemoteCerts
	I1124 09:13:16.037095 1807735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:13:16.037164 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.055441 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.162039 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:13:16.181691 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 09:13:16.202466 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:13:16.219997 1807735 provision.go:87] duration metric: took 458.686581ms to configureAuth
	I1124 09:13:16.220025 1807735 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:13:16.220260 1807735 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:13:16.220400 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.237257 1807735 main.go:143] libmachine: Using SSH client type: native
	I1124 09:13:16.237568 1807735 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34990 <nil> <nil>}
	I1124 09:13:16.237589 1807735 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:13:16.541204 1807735 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:13:16.541226 1807735 machine.go:97] duration metric: took 4.308957959s to provisionDockerMachine
	I1124 09:13:16.541237 1807735 client.go:176] duration metric: took 11.625399564s to LocalClient.Create
	I1124 09:13:16.541251 1807735 start.go:167] duration metric: took 11.625466617s to libmachine.API.Create "addons-048116"
	I1124 09:13:16.541257 1807735 start.go:293] postStartSetup for "addons-048116" (driver="docker")
	I1124 09:13:16.541267 1807735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:13:16.541327 1807735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:13:16.541366 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.559362 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.665598 1807735 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:13:16.669819 1807735 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:13:16.669854 1807735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:13:16.669867 1807735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:13:16.669940 1807735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:13:16.669970 1807735 start.go:296] duration metric: took 128.707414ms for postStartSetup
	I1124 09:13:16.670279 1807735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048116
	I1124 09:13:16.689502 1807735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/config.json ...
	I1124 09:13:16.689806 1807735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:13:16.689855 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.707642 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.810587 1807735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:13:16.816062 1807735 start.go:128] duration metric: took 11.90370535s to createHost
	I1124 09:13:16.816092 1807735 start.go:83] releasing machines lock for "addons-048116", held for 11.903857336s
	I1124 09:13:16.816168 1807735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-048116
	I1124 09:13:16.832552 1807735 ssh_runner.go:195] Run: cat /version.json
	I1124 09:13:16.832614 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.832860 1807735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:13:16.832937 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:16.854729 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.860647 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:16.956841 1807735 ssh_runner.go:195] Run: systemctl --version
	I1124 09:13:17.046093 1807735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:13:17.081917 1807735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:13:17.086239 1807735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:13:17.086359 1807735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:13:17.114378 1807735 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 09:13:17.114402 1807735 start.go:496] detecting cgroup driver to use...
	I1124 09:13:17.114453 1807735 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:13:17.114523 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:13:17.131853 1807735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:13:17.145733 1807735 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:13:17.145867 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:13:17.164700 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:13:17.185584 1807735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:13:17.305493 1807735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:13:17.418593 1807735 docker.go:234] disabling docker service ...
	I1124 09:13:17.418666 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:13:17.439411 1807735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:13:17.452380 1807735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:13:17.560287 1807735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:13:17.667977 1807735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:13:17.681708 1807735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:13:17.695990 1807735 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.34.2/kubeadm
	I1124 09:13:18.554379 1807735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:13:18.554473 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.563683 1807735 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:13:18.563776 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.572469 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.581422 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.590626 1807735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:13:18.598864 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.607570 1807735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.622166 1807735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:13:18.631283 1807735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:13:18.639347 1807735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:13:18.646878 1807735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:13:18.752759 1807735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:13:18.980155 1807735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:13:18.980236 1807735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:13:18.984117 1807735 start.go:564] Will wait 60s for crictl version
	I1124 09:13:18.984187 1807735 ssh_runner.go:195] Run: which crictl
	I1124 09:13:18.987731 1807735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:13:19.016046 1807735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:13:19.016148 1807735 ssh_runner.go:195] Run: crio --version
	I1124 09:13:19.046656 1807735 ssh_runner.go:195] Run: crio --version
	I1124 09:13:19.082631 1807735 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:13:19.085445 1807735 cli_runner.go:164] Run: docker network inspect addons-048116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:13:19.101166 1807735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:13:19.104902 1807735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:13:19.114519 1807735 kubeadm.go:884] updating cluster {Name:addons-048116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:13:19.114685 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.271232 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.430122 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.589039 1807735 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:13:19.589224 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.747362 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:19.896342 1807735 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:13:20.049221 1807735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:13:20.086133 1807735 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:13:20.086162 1807735 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:13:20.086227 1807735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:13:20.116350 1807735 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:13:20.116378 1807735 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:13:20.116386 1807735 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1124 09:13:20.116475 1807735 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-048116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:13:20.116559 1807735 ssh_runner.go:195] Run: crio config
	I1124 09:13:20.175368 1807735 cni.go:84] Creating CNI manager for ""
	I1124 09:13:20.175395 1807735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:13:20.175416 1807735 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:13:20.175440 1807735 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-048116 NodeName:addons-048116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:13:20.175564 1807735 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-048116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:13:20.175647 1807735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:13:20.183856 1807735 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:13:20.183930 1807735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:13:20.191957 1807735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 09:13:20.204891 1807735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:13:20.218512 1807735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1124 09:13:20.231553 1807735 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:13:20.235158 1807735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:13:20.245396 1807735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:13:20.360427 1807735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:13:20.375968 1807735 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116 for IP: 192.168.49.2
	I1124 09:13:20.376028 1807735 certs.go:195] generating shared ca certs ...
	I1124 09:13:20.376068 1807735 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.376254 1807735 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:13:20.506674 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt ...
	I1124 09:13:20.506709 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt: {Name:mke351c9a834a1abf5bef3fddc5b97fecdd23409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.506929 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key ...
	I1124 09:13:20.506945 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key: {Name:mk47a6e76f1c172854c494905626c98e44c63201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.507036 1807735 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:13:20.812069 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt ...
	I1124 09:13:20.812103 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt: {Name:mk084cb29d8c0c86e5bc36b0f5aa623f8ededce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.812282 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key ...
	I1124 09:13:20.812295 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key: {Name:mk9766ac9b677bcf41313bb9ea6584b7aa8dfeeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:20.812384 1807735 certs.go:257] generating profile certs ...
	I1124 09:13:20.812449 1807735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.key
	I1124 09:13:20.812469 1807735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt with IP's: []
	I1124 09:13:21.005411 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt ...
	I1124 09:13:21.005444 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: {Name:mk9a8e9f9d4da0bc14abe6aec19e982430311640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.006228 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.key ...
	I1124 09:13:21.006248 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.key: {Name:mkb2f72fc0b6fb30523834f1e8cf66e75b21667e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.006342 1807735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key.d220d5bc
	I1124 09:13:21.006365 1807735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt.d220d5bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 09:13:21.172161 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt.d220d5bc ...
	I1124 09:13:21.172193 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt.d220d5bc: {Name:mkfdc53537e073db3b47face9699ae62c55b36bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.172384 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key.d220d5bc ...
	I1124 09:13:21.172398 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key.d220d5bc: {Name:mka204191471870cbddbd84fb9debe3fd0f85aa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.172484 1807735 certs.go:382] copying /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt.d220d5bc -> /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt
	I1124 09:13:21.172561 1807735 certs.go:386] copying /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key.d220d5bc -> /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key
	I1124 09:13:21.172618 1807735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.key
	I1124 09:13:21.172642 1807735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.crt with IP's: []
	I1124 09:13:21.324478 1807735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.crt ...
	I1124 09:13:21.324508 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.crt: {Name:mkaa2d322590f5a156ffadc7716cb512aa538e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.325342 1807735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.key ...
	I1124 09:13:21.325360 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.key: {Name:mk5b79921fb62c126653935d453e15257e203c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:21.325561 1807735 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:13:21.325608 1807735 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:13:21.325640 1807735 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:13:21.325673 1807735 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:13:21.326270 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:13:21.344801 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:13:21.363228 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:13:21.381473 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:13:21.399502 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 09:13:21.417426 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:13:21.435623 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:13:21.457046 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:13:21.476782 1807735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:13:21.496823 1807735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:13:21.509765 1807735 ssh_runner.go:195] Run: openssl version
	I1124 09:13:21.515997 1807735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:13:21.524655 1807735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:13:21.528314 1807735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:13:21.528387 1807735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:13:21.569441 1807735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:13:21.577665 1807735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:13:21.581082 1807735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:13:21.581220 1807735 kubeadm.go:401] StartCluster: {Name:addons-048116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-048116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:13:21.581296 1807735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:13:21.581360 1807735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:13:21.607014 1807735 cri.go:89] found id: ""
	I1124 09:13:21.607129 1807735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:13:21.614776 1807735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:13:21.622480 1807735 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:13:21.622549 1807735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:13:21.630665 1807735 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:13:21.630688 1807735 kubeadm.go:158] found existing configuration files:
	
	I1124 09:13:21.630772 1807735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:13:21.638523 1807735 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:13:21.638614 1807735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:13:21.646227 1807735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:13:21.653778 1807735 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:13:21.653858 1807735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:13:21.661430 1807735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:13:21.669564 1807735 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:13:21.669693 1807735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:13:21.677066 1807735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:13:21.684876 1807735 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:13:21.684959 1807735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:13:21.692118 1807735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:13:21.730585 1807735 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:13:21.730650 1807735 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:13:21.754590 1807735 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:13:21.754667 1807735 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:13:21.754707 1807735 kubeadm.go:319] OS: Linux
	I1124 09:13:21.754756 1807735 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:13:21.754808 1807735 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:13:21.754858 1807735 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:13:21.754909 1807735 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:13:21.754960 1807735 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:13:21.755012 1807735 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:13:21.755063 1807735 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:13:21.755115 1807735 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:13:21.755164 1807735 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:13:21.819162 1807735 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:13:21.819350 1807735 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:13:21.819491 1807735 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:13:21.826687 1807735 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:13:21.829841 1807735 out.go:252]   - Generating certificates and keys ...
	I1124 09:13:21.830025 1807735 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:13:21.830148 1807735 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:13:22.752176 1807735 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:13:22.900280 1807735 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:13:23.248351 1807735 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:13:23.777582 1807735 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:13:24.580331 1807735 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:13:24.580708 1807735 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-048116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 09:13:25.732714 1807735 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:13:25.732846 1807735 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-048116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 09:13:26.148455 1807735 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:13:26.290626 1807735 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:13:26.571220 1807735 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:13:26.571533 1807735 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:13:27.132213 1807735 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:13:27.850988 1807735 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:13:28.518201 1807735 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:13:28.630849 1807735 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:13:29.871491 1807735 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:13:29.872536 1807735 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:13:29.877297 1807735 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:13:29.880935 1807735 out.go:252]   - Booting up control plane ...
	I1124 09:13:29.881070 1807735 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:13:29.881209 1807735 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:13:29.881313 1807735 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:13:29.899397 1807735 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:13:29.899832 1807735 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:13:29.907961 1807735 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:13:29.908571 1807735 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:13:29.908803 1807735 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:13:30.083185 1807735 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:13:30.083309 1807735 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:13:32.083460 1807735 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000759502s
	I1124 09:13:32.087005 1807735 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:13:32.087102 1807735 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 09:13:32.087417 1807735 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:13:32.087510 1807735 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:13:35.098252 1807735 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.010848143s
	I1124 09:13:37.095694 1807735 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.008635135s
	I1124 09:13:39.094626 1807735 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.007400025s
	I1124 09:13:39.129569 1807735 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:13:39.149680 1807735 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:13:39.164884 1807735 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:13:39.165092 1807735 kubeadm.go:319] [mark-control-plane] Marking the node addons-048116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:13:39.177876 1807735 kubeadm.go:319] [bootstrap-token] Using token: z52gbi.58pkqb5o55l2h01z
	I1124 09:13:39.180965 1807735 out.go:252]   - Configuring RBAC rules ...
	I1124 09:13:39.181090 1807735 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:13:39.191599 1807735 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:13:39.200342 1807735 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:13:39.205152 1807735 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:13:39.209826 1807735 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:13:39.214168 1807735 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:13:39.502539 1807735 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:13:39.931279 1807735 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:13:40.502267 1807735 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:13:40.503600 1807735 kubeadm.go:319] 
	I1124 09:13:40.503678 1807735 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:13:40.503683 1807735 kubeadm.go:319] 
	I1124 09:13:40.503761 1807735 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:13:40.503765 1807735 kubeadm.go:319] 
	I1124 09:13:40.503790 1807735 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:13:40.503849 1807735 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:13:40.503900 1807735 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:13:40.503904 1807735 kubeadm.go:319] 
	I1124 09:13:40.503966 1807735 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:13:40.503975 1807735 kubeadm.go:319] 
	I1124 09:13:40.504023 1807735 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:13:40.504028 1807735 kubeadm.go:319] 
	I1124 09:13:40.504080 1807735 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:13:40.504155 1807735 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:13:40.504224 1807735 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:13:40.504227 1807735 kubeadm.go:319] 
	I1124 09:13:40.504312 1807735 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:13:40.504389 1807735 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:13:40.504393 1807735 kubeadm.go:319] 
	I1124 09:13:40.504477 1807735 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token z52gbi.58pkqb5o55l2h01z \
	I1124 09:13:40.504580 1807735 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5d16c010d48f473ef9a89b08092f440407a6e7096b121b775134bbe2ddebd722 \
	I1124 09:13:40.504600 1807735 kubeadm.go:319] 	--control-plane 
	I1124 09:13:40.504604 1807735 kubeadm.go:319] 
	I1124 09:13:40.504690 1807735 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:13:40.504695 1807735 kubeadm.go:319] 
	I1124 09:13:40.504777 1807735 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token z52gbi.58pkqb5o55l2h01z \
	I1124 09:13:40.504890 1807735 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5d16c010d48f473ef9a89b08092f440407a6e7096b121b775134bbe2ddebd722 
	I1124 09:13:40.509178 1807735 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 09:13:40.509425 1807735 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 09:13:40.509536 1807735 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:13:40.509564 1807735 cni.go:84] Creating CNI manager for ""
	I1124 09:13:40.509578 1807735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:13:40.512790 1807735 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:13:40.515817 1807735 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:13:40.520400 1807735 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:13:40.520425 1807735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:13:40.534148 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:13:40.825232 1807735 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:13:40.825383 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:40.825507 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-048116 minikube.k8s.io/updated_at=2025_11_24T09_13_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=addons-048116 minikube.k8s.io/primary=true
	I1124 09:13:41.007715 1807735 ops.go:34] apiserver oom_adj: -16
	I1124 09:13:41.007917 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:41.508578 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:42.015042 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:42.508190 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:43.009979 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:43.508772 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:44.008463 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:44.508779 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:45.008696 1807735 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:13:45.222173 1807735 kubeadm.go:1114] duration metric: took 4.396844862s to wait for elevateKubeSystemPrivileges
	I1124 09:13:45.222222 1807735 kubeadm.go:403] duration metric: took 23.641000701s to StartCluster
	I1124 09:13:45.222291 1807735 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:45.223296 1807735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:13:45.224231 1807735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:13:45.224639 1807735 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:13:45.224753 1807735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:13:45.225052 1807735 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:13:45.225090 1807735 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 09:13:45.225604 1807735 addons.go:70] Setting yakd=true in profile "addons-048116"
	I1124 09:13:45.225626 1807735 addons.go:239] Setting addon yakd=true in "addons-048116"
	I1124 09:13:45.225661 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.226392 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.227436 1807735 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-048116"
	I1124 09:13:45.227492 1807735 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-048116"
	I1124 09:13:45.227526 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.228002 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.228609 1807735 addons.go:70] Setting cloud-spanner=true in profile "addons-048116"
	I1124 09:13:45.228643 1807735 addons.go:239] Setting addon cloud-spanner=true in "addons-048116"
	I1124 09:13:45.228684 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.229296 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.231715 1807735 out.go:179] * Verifying Kubernetes components...
	I1124 09:13:45.232105 1807735 addons.go:70] Setting metrics-server=true in profile "addons-048116"
	I1124 09:13:45.232189 1807735 addons.go:239] Setting addon metrics-server=true in "addons-048116"
	I1124 09:13:45.232254 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.234036 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.236525 1807735 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-048116"
	I1124 09:13:45.236571 1807735 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-048116"
	I1124 09:13:45.236613 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.241697 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.247289 1807735 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-048116"
	I1124 09:13:45.247690 1807735 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-048116"
	I1124 09:13:45.253854 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.254476 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.256250 1807735 addons.go:70] Setting registry=true in profile "addons-048116"
	I1124 09:13:45.256345 1807735 addons.go:239] Setting addon registry=true in "addons-048116"
	I1124 09:13:45.256399 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.257423 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.285953 1807735 addons.go:70] Setting default-storageclass=true in profile "addons-048116"
	I1124 09:13:45.286035 1807735 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-048116"
	I1124 09:13:45.286440 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.289318 1807735 addons.go:70] Setting registry-creds=true in profile "addons-048116"
	I1124 09:13:45.289354 1807735 addons.go:239] Setting addon registry-creds=true in "addons-048116"
	I1124 09:13:45.289396 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.289917 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.307772 1807735 addons.go:70] Setting gcp-auth=true in profile "addons-048116"
	I1124 09:13:45.307815 1807735 mustload.go:66] Loading cluster: addons-048116
	I1124 09:13:45.308049 1807735 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:13:45.308336 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.309198 1807735 addons.go:70] Setting storage-provisioner=true in profile "addons-048116"
	I1124 09:13:45.309236 1807735 addons.go:239] Setting addon storage-provisioner=true in "addons-048116"
	I1124 09:13:45.309284 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.309839 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.314495 1807735 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-048116"
	I1124 09:13:45.314545 1807735 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-048116"
	I1124 09:13:45.315780 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.330980 1807735 addons.go:70] Setting ingress=true in profile "addons-048116"
	I1124 09:13:45.331020 1807735 addons.go:239] Setting addon ingress=true in "addons-048116"
	I1124 09:13:45.331081 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.331582 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.336239 1807735 addons.go:70] Setting volcano=true in profile "addons-048116"
	I1124 09:13:45.336308 1807735 addons.go:239] Setting addon volcano=true in "addons-048116"
	I1124 09:13:45.336385 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.337265 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.348755 1807735 addons.go:70] Setting ingress-dns=true in profile "addons-048116"
	I1124 09:13:45.348788 1807735 addons.go:239] Setting addon ingress-dns=true in "addons-048116"
	I1124 09:13:45.348840 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.349407 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.391116 1807735 addons.go:70] Setting volumesnapshots=true in profile "addons-048116"
	I1124 09:13:45.391222 1807735 addons.go:239] Setting addon volumesnapshots=true in "addons-048116"
	I1124 09:13:45.391275 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.391875 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.392220 1807735 addons.go:70] Setting inspektor-gadget=true in profile "addons-048116"
	I1124 09:13:45.392246 1807735 addons.go:239] Setting addon inspektor-gadget=true in "addons-048116"
	I1124 09:13:45.392274 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.392702 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.426986 1807735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:13:45.531491 1807735 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:13:45.531726 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.575661 1807735 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 09:13:45.575774 1807735 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 09:13:45.577223 1807735 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:13:45.577243 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:13:45.577323 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.582412 1807735 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:13:45.582446 1807735 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:13:45.582519 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.584530 1807735 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 09:13:45.544480 1807735 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 09:13:45.546390 1807735 addons.go:239] Setting addon default-storageclass=true in "addons-048116"
	I1124 09:13:45.584950 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.587810 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.597427 1807735 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 09:13:45.597492 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 09:13:45.597573 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.547300 1807735 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-048116"
	I1124 09:13:45.597964 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:45.598419 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:45.633466 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 09:13:45.633491 1807735 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 09:13:45.633557 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.645839 1807735 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 09:13:45.653856 1807735 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 09:13:45.653934 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 09:13:45.654017 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.657289 1807735 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 09:13:45.657310 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 09:13:45.657373 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	W1124 09:13:45.578200 1807735 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 09:13:45.681232 1807735 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 09:13:45.724821 1807735 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 09:13:45.732169 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 09:13:45.735355 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 09:13:45.735464 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 09:13:45.735561 1807735 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 09:13:45.735591 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 09:13:45.735685 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.742584 1807735 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 09:13:45.743049 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 09:13:45.743071 1807735 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 09:13:45.743130 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.752521 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 09:13:45.754708 1807735 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 09:13:45.756276 1807735 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 09:13:45.761563 1807735 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 09:13:45.756476 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.756514 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.757607 1807735 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 09:13:45.763339 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 09:13:45.763421 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.769435 1807735 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 09:13:45.769462 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 09:13:45.769523 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.761385 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 09:13:45.793250 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 09:13:45.797236 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 09:13:45.801238 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 09:13:45.801591 1807735 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:13:45.801606 1807735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:13:45.801665 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.807716 1807735 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 09:13:45.810772 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 09:13:45.810797 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 09:13:45.810880 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.823551 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.786861 1807735 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 09:13:45.829228 1807735 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 09:13:45.831883 1807735 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 09:13:45.831904 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 09:13:45.831975 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.832119 1807735 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 09:13:45.832132 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 09:13:45.832179 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.863494 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.866588 1807735 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 09:13:45.873096 1807735 out.go:179]   - Using image docker.io/busybox:stable
	I1124 09:13:45.873553 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.877016 1807735 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 09:13:45.877043 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 09:13:45.877095 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.877456 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:45.974752 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.976695 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.992727 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.993638 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:45.999925 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:46.010722 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:46.014245 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	W1124 09:13:46.015180 1807735 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 09:13:46.015217 1807735 retry.go:31] will retry after 316.285365ms: ssh: handshake failed: EOF
	I1124 09:13:46.033476 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:46.044353 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	W1124 09:13:46.045625 1807735 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 09:13:46.045652 1807735 retry.go:31] will retry after 134.551187ms: ssh: handshake failed: EOF
	I1124 09:13:46.083202 1807735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:13:46.083479 1807735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1124 09:13:46.183581 1807735 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 09:13:46.183663 1807735 retry.go:31] will retry after 495.615285ms: ssh: handshake failed: EOF
	I1124 09:13:46.493675 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:13:46.533715 1807735 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 09:13:46.533797 1807735 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 09:13:46.563905 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 09:13:46.565548 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 09:13:46.603801 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 09:13:46.603878 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 09:13:46.610651 1807735 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 09:13:46.610669 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 09:13:46.618294 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 09:13:46.618314 1807735 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 09:13:46.669883 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 09:13:46.678386 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 09:13:46.679101 1807735 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:13:46.679115 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 09:13:46.688227 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 09:13:46.761539 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 09:13:46.761616 1807735 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 09:13:46.776551 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 09:13:46.792948 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 09:13:46.797733 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 09:13:46.797766 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 09:13:46.807514 1807735 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 09:13:46.807538 1807735 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 09:13:46.813300 1807735 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:13:46.813326 1807735 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:13:46.817096 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:13:46.818672 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 09:13:46.895369 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 09:13:46.895395 1807735 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 09:13:46.908479 1807735 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 09:13:46.908558 1807735 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 09:13:46.917634 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 09:13:46.917711 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 09:13:46.975971 1807735 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:13:46.976048 1807735 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:13:47.076692 1807735 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 09:13:47.076714 1807735 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 09:13:47.122656 1807735 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 09:13:47.122729 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 09:13:47.123687 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 09:13:47.123738 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 09:13:47.139331 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:13:47.270583 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 09:13:47.270656 1807735 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 09:13:47.290751 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 09:13:47.295699 1807735 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 09:13:47.295781 1807735 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 09:13:47.379355 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 09:13:47.441496 1807735 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.357968223s)
	I1124 09:13:47.441612 1807735 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.358008502s)
	I1124 09:13:47.441754 1807735 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1124 09:13:47.443126 1807735 node_ready.go:35] waiting up to 6m0s for node "addons-048116" to be "Ready" ...
	I1124 09:13:47.461801 1807735 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 09:13:47.461877 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 09:13:47.544329 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 09:13:47.544402 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 09:13:47.577686 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 09:13:47.577707 1807735 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 09:13:47.650485 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 09:13:47.825039 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 09:13:47.825132 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 09:13:47.947455 1807735 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-048116" context rescaled to 1 replicas
	I1124 09:13:48.073033 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 09:13:48.073117 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 09:13:48.333158 1807735 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 09:13:48.333232 1807735 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 09:13:48.603380 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1124 09:13:49.466347 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:49.684959 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.119344475s)
	I1124 09:13:49.685015 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.01505652s)
	I1124 09:13:49.685093 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.191337219s)
	I1124 09:13:49.684927 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.12094727s)
	I1124 09:13:49.893081 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.214661661s)
	I1124 09:13:51.490949 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.802685523s)
	I1124 09:13:51.491035 1807735 addons.go:495] Verifying addon ingress=true in "addons-048116"
	I1124 09:13:51.491364 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.714677823s)
	I1124 09:13:51.491420 1807735 addons.go:495] Verifying addon registry=true in "addons-048116"
	I1124 09:13:51.491546 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.698537262s)
	I1124 09:13:51.491594 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.674410494s)
	I1124 09:13:51.491838 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.673142818s)
	I1124 09:13:51.491911 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.352502678s)
	I1124 09:13:51.491919 1807735 addons.go:495] Verifying addon metrics-server=true in "addons-048116"
	I1124 09:13:51.491964 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.201147872s)
	I1124 09:13:51.492328 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.112900301s)
	I1124 09:13:51.492542 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.8419797s)
	W1124 09:13:51.492576 1807735 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 09:13:51.492597 1807735 retry.go:31] will retry after 186.113889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 09:13:51.496229 1807735 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-048116 service yakd-dashboard -n yakd-dashboard
	
	I1124 09:13:51.496383 1807735 out.go:179] * Verifying ingress addon...
	I1124 09:13:51.496457 1807735 out.go:179] * Verifying registry addon...
	I1124 09:13:51.500190 1807735 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 09:13:51.501211 1807735 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 09:13:51.512854 1807735 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 09:13:51.512876 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1124 09:13:51.515820 1807735 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1124 09:13:51.613490 1807735 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 09:13:51.613567 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:51.679189 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 09:13:51.882866 1807735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.279362332s)
	I1124 09:13:51.882899 1807735 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-048116"
	I1124 09:13:51.887216 1807735 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 09:13:51.890920 1807735 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 09:13:51.925288 1807735 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 09:13:51.925313 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:13:51.946376 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:52.023848 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:52.024431 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:52.394392 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:52.504201 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:52.504653 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:52.894308 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:53.006879 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:53.006991 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:53.165702 1807735 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 09:13:53.165904 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:53.185054 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:53.302242 1807735 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 09:13:53.316030 1807735 addons.go:239] Setting addon gcp-auth=true in "addons-048116"
	I1124 09:13:53.316135 1807735 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:13:53.316631 1807735 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:13:53.333885 1807735 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 09:13:53.333942 1807735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:13:53.350970 1807735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:13:53.394973 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:53.455991 1807735 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 09:13:53.458806 1807735 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 09:13:53.461573 1807735 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 09:13:53.461606 1807735 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 09:13:53.475477 1807735 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 09:13:53.475551 1807735 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 09:13:53.489950 1807735 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 09:13:53.489977 1807735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 09:13:53.505172 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:53.505718 1807735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 09:13:53.506582 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:53.907950 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:13:53.949433 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:54.028468 1807735 addons.go:495] Verifying addon gcp-auth=true in "addons-048116"
	I1124 09:13:54.030697 1807735 out.go:179] * Verifying gcp-auth addon...
	I1124 09:13:54.034307 1807735 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 09:13:54.035133 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:54.035621 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:54.124610 1807735 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 09:13:54.124641 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:54.394193 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:54.505708 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:54.506079 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:54.537996 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:54.894493 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:55.005810 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:55.026420 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:55.038517 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:55.394647 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:55.503465 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:55.505273 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:55.538059 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:55.894148 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:56.005566 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:56.006431 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:56.037565 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:56.394569 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:13:56.447129 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:56.503492 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:56.504191 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:56.537780 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:56.894413 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:57.004417 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:57.005632 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:57.037646 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:57.394858 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:57.504280 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:57.504685 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:57.537409 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:57.903780 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:58.007012 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:58.007835 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:58.038029 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:58.394777 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:58.503565 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:58.504067 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:58.538213 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:58.894555 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:13:58.946615 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:13:59.004354 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:59.006086 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:59.037859 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:59.394060 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:13:59.504424 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:13:59.504553 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:13:59.537325 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:13:59.895936 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:00.019415 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:00.026629 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:00.073611 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:00.394184 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:00.503143 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:00.504416 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:00.538851 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:00.894326 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:00.946726 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:01.004708 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:01.004853 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:01.037894 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:01.393871 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:01.504232 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:01.504547 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:01.537342 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:01.894862 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:02.005894 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:02.007406 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:02.038103 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:02.394030 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:02.504652 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:02.504801 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:02.537876 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:02.894957 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:02.947109 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:03.009757 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:03.010159 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:03.037867 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:03.393996 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:03.503539 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:03.505844 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:03.537623 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:03.901476 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:04.007435 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:04.007579 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:04.038106 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:04.394354 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:04.505249 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:04.505586 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:04.537432 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:04.894915 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:05.007989 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:05.008278 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:05.037366 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:05.394276 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:05.447909 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:05.504259 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:05.505030 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:05.538111 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:05.901348 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:06.004376 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:06.010116 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:06.038243 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:06.394873 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:06.503642 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:06.504475 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:06.537738 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:06.894812 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:07.005219 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:07.005386 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:07.037532 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:07.393850 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:07.503562 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:07.504889 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:07.537359 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:07.895495 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:07.946289 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:08.006356 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:08.006426 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:08.037639 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:08.393645 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:08.503714 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:08.504959 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:08.537585 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:08.899251 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:09.010467 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:09.010696 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:09.037834 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:09.394148 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:09.504103 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:09.504843 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:09.537660 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:09.894873 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:09.946930 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:10.010378 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:10.010482 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:10.037547 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:10.394311 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:10.503244 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:10.504340 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:10.537205 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:10.894793 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:11.013704 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:11.014058 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:11.047311 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:11.395169 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:11.503503 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:11.504166 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:11.537950 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:11.894780 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:12.006077 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:12.008835 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:12.038520 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:12.395078 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:12.447902 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:12.504538 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:12.504902 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:12.537834 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:12.893871 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:13.006169 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:13.008010 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:13.037751 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:13.393904 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:13.503835 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:13.505686 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:13.537403 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:13.899695 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:14.007451 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:14.007674 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:14.041287 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:14.393931 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:14.504195 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:14.504331 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:14.537433 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:14.894901 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:14.946996 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:15.024446 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:15.024517 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:15.038803 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:15.393800 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:15.504507 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:15.504651 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:15.537825 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:15.894334 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:16.010238 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:16.011447 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:16.037585 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:16.394966 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:16.504191 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:16.504673 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:16.537632 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:16.894034 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:17.005743 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:17.006353 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:17.038058 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:17.394615 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:17.446768 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:17.503801 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:17.505217 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:17.538349 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:17.895014 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:18.006085 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:18.006620 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:18.037715 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:18.395054 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:18.504259 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:18.505452 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:18.537175 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:18.894755 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:19.004994 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:19.005326 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:19.037343 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:19.394489 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:19.504368 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:19.504795 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:19.537866 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:19.894138 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:19.946945 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:20.006999 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:20.007144 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:20.038289 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:20.394222 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:20.503902 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:20.504484 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:20.537371 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:20.894341 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:21.006612 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:21.006660 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:21.037524 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:21.394643 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:21.503566 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:21.504984 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:21.537908 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:21.894334 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:22.013888 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:22.013966 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:22.038210 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:22.394334 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:22.447021 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:22.505463 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:22.505525 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:22.538206 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:22.894471 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:23.004965 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:23.005292 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:23.038042 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:23.393942 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:23.504745 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:23.505241 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:23.538269 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:23.894398 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:24.007349 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:24.008348 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:24.038139 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:24.393880 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1124 09:14:24.447465 1807735 node_ready.go:57] node "addons-048116" has "Ready":"False" status (will retry)
	I1124 09:14:24.504663 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:24.504817 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:24.537836 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:24.894012 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:25.005512 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:25.007780 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:25.037656 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:25.394086 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:25.504060 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:25.504216 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:25.538179 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:25.894394 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:26.018144 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:26.021484 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:26.058840 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:26.446876 1807735 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 09:14:26.446897 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:26.453269 1807735 node_ready.go:49] node "addons-048116" is "Ready"
	I1124 09:14:26.453296 1807735 node_ready.go:38] duration metric: took 39.01001103s for node "addons-048116" to be "Ready" ...
	I1124 09:14:26.453310 1807735 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:14:26.453367 1807735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:14:26.490830 1807735 api_server.go:72] duration metric: took 41.266143681s to wait for apiserver process to appear ...
	I1124 09:14:26.490905 1807735 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:14:26.490940 1807735 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 09:14:26.508191 1807735 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 09:14:26.511071 1807735 api_server.go:141] control plane version: v1.34.2
	I1124 09:14:26.511096 1807735 api_server.go:131] duration metric: took 20.170383ms to wait for apiserver health ...
	I1124 09:14:26.511105 1807735 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:14:26.525804 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:26.526009 1807735 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 09:14:26.526060 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:26.532589 1807735 system_pods.go:59] 19 kube-system pods found
	I1124 09:14:26.532675 1807735 system_pods.go:61] "coredns-66bc5c9577-nbktx" [f8fb570b-bdc3-42aa-ab43-b610bb60e5a5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:14:26.532698 1807735 system_pods.go:61] "csi-hostpath-attacher-0" [a4c86076-80c5-4bba-b268-334b11c16027] Pending
	I1124 09:14:26.532719 1807735 system_pods.go:61] "csi-hostpath-resizer-0" [0f17422f-aecf-4107-a492-e15b5e2f8e34] Pending
	I1124 09:14:26.532772 1807735 system_pods.go:61] "csi-hostpathplugin-7cjv4" [7109430d-1382-4075-a4a8-3017ec67ceff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 09:14:26.532799 1807735 system_pods.go:61] "etcd-addons-048116" [59064330-be29-4606-bd4d-1ce20eecae05] Running
	I1124 09:14:26.532826 1807735 system_pods.go:61] "kindnet-qrx7h" [a48613f8-c8b7-469f-be2f-43cbbcd7c2bd] Running
	I1124 09:14:26.532861 1807735 system_pods.go:61] "kube-apiserver-addons-048116" [9a21db7c-83e8-461c-8b17-3b2be23f4c36] Running
	I1124 09:14:26.532884 1807735 system_pods.go:61] "kube-controller-manager-addons-048116" [b66c4cd3-86d2-430b-aee3-a2e03af4cf02] Running
	I1124 09:14:26.532907 1807735 system_pods.go:61] "kube-ingress-dns-minikube" [e796ce2b-f394-47eb-b88b-7f0c960ca793] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 09:14:26.532939 1807735 system_pods.go:61] "kube-proxy-959tb" [1cc9b107-5192-4845-823e-2c1c14be9277] Running
	I1124 09:14:26.532965 1807735 system_pods.go:61] "kube-scheduler-addons-048116" [55e5e89e-d9fa-49f1-a8a5-92ddd038e4ca] Running
	I1124 09:14:26.532987 1807735 system_pods.go:61] "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Pending
	I1124 09:14:26.533006 1807735 system_pods.go:61] "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Pending
	I1124 09:14:26.533026 1807735 system_pods.go:61] "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Pending
	I1124 09:14:26.533060 1807735 system_pods.go:61] "registry-creds-764b6fb674-9dvm5" [66991aa2-12ee-40af-aa3f-298f09e784f0] Pending
	I1124 09:14:26.533080 1807735 system_pods.go:61] "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Pending
	I1124 09:14:26.533164 1807735 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rsz7j" [6193c88c-9174-4625-aa38-f09a89419160] Pending
	I1124 09:14:26.533191 1807735 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zn7bf" [4863bd30-c420-492e-a4ad-3c572506e9fb] Pending
	I1124 09:14:26.533212 1807735 system_pods.go:61] "storage-provisioner" [cb118803-3bb3-4a2e-a061-9044a0402dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:14:26.533236 1807735 system_pods.go:74] duration metric: took 22.122211ms to wait for pod list to return data ...
	I1124 09:14:26.533270 1807735 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:14:26.550724 1807735 default_sa.go:45] found service account: "default"
	I1124 09:14:26.550956 1807735 default_sa.go:55] duration metric: took 17.663289ms for default service account to be created ...
	I1124 09:14:26.550996 1807735 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:14:26.550914 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:26.565906 1807735 system_pods.go:86] 19 kube-system pods found
	I1124 09:14:26.565999 1807735 system_pods.go:89] "coredns-66bc5c9577-nbktx" [f8fb570b-bdc3-42aa-ab43-b610bb60e5a5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:14:26.566021 1807735 system_pods.go:89] "csi-hostpath-attacher-0" [a4c86076-80c5-4bba-b268-334b11c16027] Pending
	I1124 09:14:26.566057 1807735 system_pods.go:89] "csi-hostpath-resizer-0" [0f17422f-aecf-4107-a492-e15b5e2f8e34] Pending
	I1124 09:14:26.566083 1807735 system_pods.go:89] "csi-hostpathplugin-7cjv4" [7109430d-1382-4075-a4a8-3017ec67ceff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 09:14:26.566119 1807735 system_pods.go:89] "etcd-addons-048116" [59064330-be29-4606-bd4d-1ce20eecae05] Running
	I1124 09:14:26.566147 1807735 system_pods.go:89] "kindnet-qrx7h" [a48613f8-c8b7-469f-be2f-43cbbcd7c2bd] Running
	I1124 09:14:26.566171 1807735 system_pods.go:89] "kube-apiserver-addons-048116" [9a21db7c-83e8-461c-8b17-3b2be23f4c36] Running
	I1124 09:14:26.566194 1807735 system_pods.go:89] "kube-controller-manager-addons-048116" [b66c4cd3-86d2-430b-aee3-a2e03af4cf02] Running
	I1124 09:14:26.566232 1807735 system_pods.go:89] "kube-ingress-dns-minikube" [e796ce2b-f394-47eb-b88b-7f0c960ca793] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 09:14:26.566256 1807735 system_pods.go:89] "kube-proxy-959tb" [1cc9b107-5192-4845-823e-2c1c14be9277] Running
	I1124 09:14:26.566283 1807735 system_pods.go:89] "kube-scheduler-addons-048116" [55e5e89e-d9fa-49f1-a8a5-92ddd038e4ca] Running
	I1124 09:14:26.566306 1807735 system_pods.go:89] "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Pending
	I1124 09:14:26.566338 1807735 system_pods.go:89] "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Pending
	I1124 09:14:26.566365 1807735 system_pods.go:89] "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Pending
	I1124 09:14:26.566389 1807735 system_pods.go:89] "registry-creds-764b6fb674-9dvm5" [66991aa2-12ee-40af-aa3f-298f09e784f0] Pending
	I1124 09:14:26.566412 1807735 system_pods.go:89] "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Pending
	I1124 09:14:26.566442 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rsz7j" [6193c88c-9174-4625-aa38-f09a89419160] Pending
	I1124 09:14:26.566469 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zn7bf" [4863bd30-c420-492e-a4ad-3c572506e9fb] Pending
	I1124 09:14:26.566493 1807735 system_pods.go:89] "storage-provisioner" [cb118803-3bb3-4a2e-a061-9044a0402dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:14:26.566527 1807735 retry.go:31] will retry after 266.890041ms: missing components: kube-dns
	I1124 09:14:26.867282 1807735 system_pods.go:86] 19 kube-system pods found
	I1124 09:14:26.867368 1807735 system_pods.go:89] "coredns-66bc5c9577-nbktx" [f8fb570b-bdc3-42aa-ab43-b610bb60e5a5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:14:26.867393 1807735 system_pods.go:89] "csi-hostpath-attacher-0" [a4c86076-80c5-4bba-b268-334b11c16027] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 09:14:26.867433 1807735 system_pods.go:89] "csi-hostpath-resizer-0" [0f17422f-aecf-4107-a492-e15b5e2f8e34] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 09:14:26.867461 1807735 system_pods.go:89] "csi-hostpathplugin-7cjv4" [7109430d-1382-4075-a4a8-3017ec67ceff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 09:14:26.867485 1807735 system_pods.go:89] "etcd-addons-048116" [59064330-be29-4606-bd4d-1ce20eecae05] Running
	I1124 09:14:26.867507 1807735 system_pods.go:89] "kindnet-qrx7h" [a48613f8-c8b7-469f-be2f-43cbbcd7c2bd] Running
	I1124 09:14:26.867540 1807735 system_pods.go:89] "kube-apiserver-addons-048116" [9a21db7c-83e8-461c-8b17-3b2be23f4c36] Running
	I1124 09:14:26.867564 1807735 system_pods.go:89] "kube-controller-manager-addons-048116" [b66c4cd3-86d2-430b-aee3-a2e03af4cf02] Running
	I1124 09:14:26.867587 1807735 system_pods.go:89] "kube-ingress-dns-minikube" [e796ce2b-f394-47eb-b88b-7f0c960ca793] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 09:14:26.867609 1807735 system_pods.go:89] "kube-proxy-959tb" [1cc9b107-5192-4845-823e-2c1c14be9277] Running
	I1124 09:14:26.867641 1807735 system_pods.go:89] "kube-scheduler-addons-048116" [55e5e89e-d9fa-49f1-a8a5-92ddd038e4ca] Running
	I1124 09:14:26.867669 1807735 system_pods.go:89] "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:14:26.867694 1807735 system_pods.go:89] "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 09:14:26.867721 1807735 system_pods.go:89] "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 09:14:26.867754 1807735 system_pods.go:89] "registry-creds-764b6fb674-9dvm5" [66991aa2-12ee-40af-aa3f-298f09e784f0] Pending
	I1124 09:14:26.867784 1807735 system_pods.go:89] "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 09:14:26.867809 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rsz7j" [6193c88c-9174-4625-aa38-f09a89419160] Pending
	I1124 09:14:26.867831 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zn7bf" [4863bd30-c420-492e-a4ad-3c572506e9fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 09:14:26.867866 1807735 system_pods.go:89] "storage-provisioner" [cb118803-3bb3-4a2e-a061-9044a0402dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:14:26.867901 1807735 retry.go:31] will retry after 326.243688ms: missing components: kube-dns
	I1124 09:14:26.909888 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:27.009659 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:27.010283 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:27.108377 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:27.208981 1807735 system_pods.go:86] 19 kube-system pods found
	I1124 09:14:27.209068 1807735 system_pods.go:89] "coredns-66bc5c9577-nbktx" [f8fb570b-bdc3-42aa-ab43-b610bb60e5a5] Running
	I1124 09:14:27.209094 1807735 system_pods.go:89] "csi-hostpath-attacher-0" [a4c86076-80c5-4bba-b268-334b11c16027] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 09:14:27.209143 1807735 system_pods.go:89] "csi-hostpath-resizer-0" [0f17422f-aecf-4107-a492-e15b5e2f8e34] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 09:14:27.209174 1807735 system_pods.go:89] "csi-hostpathplugin-7cjv4" [7109430d-1382-4075-a4a8-3017ec67ceff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 09:14:27.209199 1807735 system_pods.go:89] "etcd-addons-048116" [59064330-be29-4606-bd4d-1ce20eecae05] Running
	I1124 09:14:27.209219 1807735 system_pods.go:89] "kindnet-qrx7h" [a48613f8-c8b7-469f-be2f-43cbbcd7c2bd] Running
	I1124 09:14:27.209251 1807735 system_pods.go:89] "kube-apiserver-addons-048116" [9a21db7c-83e8-461c-8b17-3b2be23f4c36] Running
	I1124 09:14:27.209276 1807735 system_pods.go:89] "kube-controller-manager-addons-048116" [b66c4cd3-86d2-430b-aee3-a2e03af4cf02] Running
	I1124 09:14:27.209305 1807735 system_pods.go:89] "kube-ingress-dns-minikube" [e796ce2b-f394-47eb-b88b-7f0c960ca793] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 09:14:27.209330 1807735 system_pods.go:89] "kube-proxy-959tb" [1cc9b107-5192-4845-823e-2c1c14be9277] Running
	I1124 09:14:27.209365 1807735 system_pods.go:89] "kube-scheduler-addons-048116" [55e5e89e-d9fa-49f1-a8a5-92ddd038e4ca] Running
	I1124 09:14:27.209399 1807735 system_pods.go:89] "metrics-server-85b7d694d7-4fg4f" [3da7bdc3-57b5-4158-a494-a2f067911493] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:14:27.209426 1807735 system_pods.go:89] "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 09:14:27.209451 1807735 system_pods.go:89] "registry-6b586f9694-d2pv7" [3a7d400a-b388-4fa6-9532-6f67effbb6b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 09:14:27.209490 1807735 system_pods.go:89] "registry-creds-764b6fb674-9dvm5" [66991aa2-12ee-40af-aa3f-298f09e784f0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 09:14:27.209512 1807735 system_pods.go:89] "registry-proxy-2xmpl" [1e3f2f55-5373-4b63-90c9-ea5a8e3513f1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 09:14:27.209537 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rsz7j" [6193c88c-9174-4625-aa38-f09a89419160] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 09:14:27.209572 1807735 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zn7bf" [4863bd30-c420-492e-a4ad-3c572506e9fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 09:14:27.209601 1807735 system_pods.go:89] "storage-provisioner" [cb118803-3bb3-4a2e-a061-9044a0402dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:14:27.209626 1807735 system_pods.go:126] duration metric: took 658.603739ms to wait for k8s-apps to be running ...
	I1124 09:14:27.209650 1807735 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:14:27.209737 1807735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:14:27.235701 1807735 system_svc.go:56] duration metric: took 26.041787ms WaitForService to wait for kubelet
	I1124 09:14:27.235774 1807735 kubeadm.go:587] duration metric: took 42.011105987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:14:27.235808 1807735 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:14:27.239529 1807735 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 09:14:27.239604 1807735 node_conditions.go:123] node cpu capacity is 2
	I1124 09:14:27.239632 1807735 node_conditions.go:105] duration metric: took 3.802478ms to run NodePressure ...
	I1124 09:14:27.239661 1807735 start.go:242] waiting for startup goroutines ...
	I1124 09:14:27.395592 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:27.505211 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:27.505774 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:27.537964 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:27.895475 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:28.006592 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:28.006793 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:28.037465 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:28.395433 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:28.505685 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:28.507422 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:28.537806 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:28.895026 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:29.022403 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:29.022546 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:29.049607 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:29.394805 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:29.505156 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:29.505343 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:29.537466 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:29.895609 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:30.030992 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:30.042703 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:30.047261 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:30.395258 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:30.505371 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:30.505547 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:30.538011 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:30.900582 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:31.003674 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:31.006402 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:31.037452 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:31.395616 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:31.506363 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:31.506781 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:31.538100 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:31.895648 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:32.006859 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:32.007610 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:32.037879 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:32.394893 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:32.505995 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:32.506390 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:32.537801 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:32.895768 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:33.006510 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:33.006892 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:33.037861 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:33.395492 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:33.503764 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:33.505057 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:33.537995 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:33.894965 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:34.005726 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:34.005957 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:34.038032 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:34.394139 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:34.504593 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:34.505096 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:34.538876 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:34.896003 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:35.006845 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:35.007058 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:35.037822 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:35.394990 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:35.505188 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:35.505276 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:35.537962 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:35.895019 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:36.011398 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:36.019652 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:36.038418 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:36.395207 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:36.505259 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:36.505443 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:36.537679 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:36.895255 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:37.007789 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:37.008286 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:37.037732 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:37.394984 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:37.505343 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:37.506837 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:37.538329 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:37.894652 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:38.010451 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:38.023178 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:38.038575 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:38.395496 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:38.504353 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:38.505350 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:38.537170 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:38.899152 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:39.007126 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:39.007291 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:39.037338 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:39.395579 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:39.505254 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:39.507587 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:39.537914 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:39.894900 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:40.006947 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:40.007822 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:40.071110 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:40.394998 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:40.505240 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:40.505671 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:40.538136 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:40.894665 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:41.006781 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:41.006930 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:41.037890 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:41.394378 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:41.505136 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:41.505350 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:41.538231 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:41.895755 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:42.031940 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:42.032454 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:42.037212 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:42.394651 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:42.505505 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:42.505768 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:42.538071 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:42.894871 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:43.006338 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:43.007591 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:43.037908 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:43.394853 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:43.504178 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:43.504534 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:43.537122 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:43.894666 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:44.005897 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:44.006690 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:44.037724 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:44.394856 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:44.506089 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:44.506709 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:44.537766 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:44.894928 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:45.024287 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:45.048280 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:45.049358 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:45.395614 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:45.506218 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:45.506672 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:45.537885 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:45.897287 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:46.008434 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:46.014526 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:46.038133 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:46.395294 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:46.504993 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:46.507680 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:46.539122 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:46.895106 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:47.009009 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:47.009537 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:47.038474 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:47.396841 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:47.505744 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:47.506165 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:47.538571 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:47.900562 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:48.007471 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:48.008048 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:48.038098 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:48.395204 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:48.504498 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:48.505605 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:48.606585 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:48.908048 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:49.010194 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:49.010369 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:49.038424 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:49.404923 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:49.505738 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:49.505871 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:49.538586 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:49.896183 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:50.007252 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:50.007562 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:50.042125 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:50.398717 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:50.507456 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:50.508034 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:50.541710 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:50.897685 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:51.007201 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:51.007761 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:51.039449 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:51.395457 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:51.506233 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:51.506430 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:51.546469 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:51.895072 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:52.006224 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:52.007825 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:52.037865 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:52.394505 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:52.510596 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:52.513329 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:52.537904 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:52.906907 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:53.006522 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:53.006722 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:53.041008 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:53.395081 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:53.507656 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:53.508999 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:53.541452 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:53.894886 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:54.014384 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:54.014642 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:54.038100 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:54.394483 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:54.509723 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:54.510267 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:54.615214 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:54.895434 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:55.006121 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:55.006684 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:55.038181 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:55.395329 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:55.505082 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:55.505579 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:55.537166 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:55.894578 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:56.006649 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:56.006923 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:56.038762 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:56.394150 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:56.504773 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:56.505566 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:56.537938 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:56.894998 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:57.006842 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:57.007051 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:57.038303 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:57.395346 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:57.505561 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:57.505668 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:57.537481 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:57.894955 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:58.008366 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:58.008961 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:58.107621 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:58.395584 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:58.506245 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:58.506625 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:58.538630 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:58.906891 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:59.020751 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:59.021566 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:59.049208 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:59.395539 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:14:59.504752 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:14:59.504872 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:14:59.606011 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:14:59.894839 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:00.069292 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:00.069314 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:00.078297 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:00.400887 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:00.535122 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:00.535283 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:00.539652 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:00.895478 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:01.008404 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:01.008599 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:01.038500 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:01.395651 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:01.503986 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:01.504949 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:01.538681 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:01.894844 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:02.011531 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:02.012326 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:02.037409 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:02.396756 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:02.510172 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:02.510350 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:02.538069 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:02.895136 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:03.014425 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:03.014763 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:03.112576 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:03.398984 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:03.504897 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:03.505052 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:03.540409 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:03.895573 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:04.004969 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:04.007372 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:04.038343 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:04.398602 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:04.504710 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:04.504922 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:04.537849 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:04.895916 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:05.008685 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:05.008940 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:05.038296 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:05.395250 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:05.505191 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:05.506580 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:05.542401 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:05.895262 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:06.021859 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:06.021988 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:06.038137 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:06.395294 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:06.505599 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:06.506041 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:06.538061 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:06.895958 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:07.010146 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:07.010283 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:07.038223 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:07.395732 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:07.505385 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 09:15:07.505885 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:07.538280 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:07.895992 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:08.007428 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:08.008187 1807735 kapi.go:107] duration metric: took 1m16.50697382s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 09:15:08.037369 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:08.395352 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:08.504116 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:08.538494 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:08.897887 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:09.012966 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:09.038306 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:09.395713 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:09.504077 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:09.537822 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:09.894086 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:10.005596 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:10.038144 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:10.395321 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:10.504321 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:10.538362 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:10.894923 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:11.012189 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:11.038174 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:11.395379 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:11.503626 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:11.540039 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:11.896905 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:12.004923 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:12.041817 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:12.394526 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:12.505180 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:12.540029 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:12.895255 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:13.004512 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:13.038056 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:13.394965 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:13.503916 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:13.537625 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:13.894987 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:14.005290 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:14.036963 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:14.394745 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:14.503934 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:14.537549 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:14.894687 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:15.016468 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:15.041744 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:15.395611 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:15.503570 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:15.537470 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:15.894629 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:16.005568 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:16.037949 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:16.396673 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:16.505199 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:16.538382 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:16.895289 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:17.054861 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:17.060599 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:17.396093 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:17.503512 1807735 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 09:15:17.537466 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:17.895599 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:18.004989 1807735 kapi.go:107] duration metric: took 1m26.504796214s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 09:15:18.038902 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:18.394605 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:18.537455 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:18.895770 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:19.041430 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:19.395577 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:19.538012 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:19.894421 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:20.037724 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:20.394771 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:20.538111 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:20.909217 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:21.037656 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:21.395102 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:21.538236 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:21.895597 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:22.037701 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:22.394786 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:22.538214 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:22.894774 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:23.038061 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:23.395692 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:23.538251 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:23.900298 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:24.038878 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:24.394155 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 09:15:24.538267 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:24.895663 1807735 kapi.go:107] duration metric: took 1m33.004742553s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 09:15:25.047926 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:25.538061 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:26.038025 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:26.537555 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:27.038242 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:27.537622 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:28.038184 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:28.537538 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:29.037859 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:29.538379 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:30.045818 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:30.538086 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:31.037566 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:31.538052 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:32.045627 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:32.538365 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:33.037596 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:33.538090 1807735 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 09:15:34.037492 1807735 kapi.go:107] duration metric: took 1m40.003183234s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 09:15:34.041016 1807735 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-048116 cluster.
	I1124 09:15:34.043837 1807735 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 09:15:34.046697 1807735 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 09:15:34.049723 1807735 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1124 09:15:34.053468 1807735 addons.go:530] duration metric: took 1m48.828362513s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner amd-gpu-device-plugin ingress-dns inspektor-gadget registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1124 09:15:34.053543 1807735 start.go:247] waiting for cluster config update ...
	I1124 09:15:34.053566 1807735 start.go:256] writing updated cluster config ...
	I1124 09:15:34.053874 1807735 ssh_runner.go:195] Run: rm -f paused
	I1124 09:15:34.058284 1807735 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:15:34.138880 1807735 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nbktx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.143946 1807735 pod_ready.go:94] pod "coredns-66bc5c9577-nbktx" is "Ready"
	I1124 09:15:34.143980 1807735 pod_ready.go:86] duration metric: took 5.069334ms for pod "coredns-66bc5c9577-nbktx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.146404 1807735 pod_ready.go:83] waiting for pod "etcd-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.150947 1807735 pod_ready.go:94] pod "etcd-addons-048116" is "Ready"
	I1124 09:15:34.150974 1807735 pod_ready.go:86] duration metric: took 4.542361ms for pod "etcd-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.153240 1807735 pod_ready.go:83] waiting for pod "kube-apiserver-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.158240 1807735 pod_ready.go:94] pod "kube-apiserver-addons-048116" is "Ready"
	I1124 09:15:34.158267 1807735 pod_ready.go:86] duration metric: took 5.000016ms for pod "kube-apiserver-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.160916 1807735 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.462334 1807735 pod_ready.go:94] pod "kube-controller-manager-addons-048116" is "Ready"
	I1124 09:15:34.462364 1807735 pod_ready.go:86] duration metric: took 301.423595ms for pod "kube-controller-manager-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:34.663347 1807735 pod_ready.go:83] waiting for pod "kube-proxy-959tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:35.063073 1807735 pod_ready.go:94] pod "kube-proxy-959tb" is "Ready"
	I1124 09:15:35.063107 1807735 pod_ready.go:86] duration metric: took 399.681581ms for pod "kube-proxy-959tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:35.263095 1807735 pod_ready.go:83] waiting for pod "kube-scheduler-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:35.663065 1807735 pod_ready.go:94] pod "kube-scheduler-addons-048116" is "Ready"
	I1124 09:15:35.663095 1807735 pod_ready.go:86] duration metric: took 399.968583ms for pod "kube-scheduler-addons-048116" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:15:35.663112 1807735 pod_ready.go:40] duration metric: took 1.604794652s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:15:35.729393 1807735 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1124 09:15:35.734873 1807735 out.go:179] * Done! kubectl is now configured to use "addons-048116" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:15:37 addons-048116 crio[832]: time="2025-11-24T09:15:37.07753983Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4b0716202b2c40b5bdee94cbb70a42e87b8148bd5a732359fa71155f4b9f6a51 UID:46c2bdf3-43ee-4778-959c-9523d8d1f256 NetNS:/var/run/netns/afdf876a-c651-4172-868f-0c3e5bff0936 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40031ec958}] Aliases:map[]}"
	Nov 24 09:15:37 addons-048116 crio[832]: time="2025-11-24T09:15:37.078045642Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 09:15:37 addons-048116 crio[832]: time="2025-11-24T09:15:37.081891329Z" level=info msg="Ran pod sandbox 4b0716202b2c40b5bdee94cbb70a42e87b8148bd5a732359fa71155f4b9f6a51 with infra container: default/busybox/POD" id=6c07451a-d160-4440-81da-d52b44d96f80 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:15:37 addons-048116 crio[832]: time="2025-11-24T09:15:37.08368021Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d78e2ea1-928e-4e53-92f9-4506d2cc9241 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:15:37 addons-048116 crio[832]: time="2025-11-24T09:15:37.083830883Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d78e2ea1-928e-4e53-92f9-4506d2cc9241 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:15:37 addons-048116 crio[832]: time="2025-11-24T09:15:37.083876955Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d78e2ea1-928e-4e53-92f9-4506d2cc9241 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:15:37 addons-048116 crio[832]: time="2025-11-24T09:15:37.08655471Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3c21924-9f2d-4e7a-9bc0-c77ffe79a67f name=/runtime.v1.ImageService/PullImage
	Nov 24 09:15:37 addons-048116 crio[832]: time="2025-11-24T09:15:37.089307771Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.135891128Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a3c21924-9f2d-4e7a-9bc0-c77ffe79a67f name=/runtime.v1.ImageService/PullImage
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.137071353Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aa3cb121-4fe1-4019-8164-49daaede7ca4 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.140386834Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=24f7ac4f-d17a-4850-acb3-5d7092b2edbf name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.146394281Z" level=info msg="Creating container: default/busybox/busybox" id=cbbfcfcd-11b5-4c14-9b54-ff8d6923b7f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.146547957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.155343412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.155853843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.171959474Z" level=info msg="Created container 666d4a654c782ffa91543927aed120f88b2b173da9f7103281c333090d3efe0e: default/busybox/busybox" id=cbbfcfcd-11b5-4c14-9b54-ff8d6923b7f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.17461052Z" level=info msg="Starting container: 666d4a654c782ffa91543927aed120f88b2b173da9f7103281c333090d3efe0e" id=7e2efd28-e2a3-4fe8-84e1-6595062b46db name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.176503459Z" level=info msg="Started container" PID=4979 containerID=666d4a654c782ffa91543927aed120f88b2b173da9f7103281c333090d3efe0e description=default/busybox/busybox id=7e2efd28-e2a3-4fe8-84e1-6595062b46db name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b0716202b2c40b5bdee94cbb70a42e87b8148bd5a732359fa71155f4b9f6a51
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.964506773Z" level=info msg="Removing container: b601481291d18d293fde1e9bdd739aaa465044237a31cbe7191216fcbd8f394c" id=36c33e08-c66a-49c2-a297-0b17a687ab85 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.967027536Z" level=info msg="Error loading conmon cgroup of container b601481291d18d293fde1e9bdd739aaa465044237a31cbe7191216fcbd8f394c: cgroup deleted" id=36c33e08-c66a-49c2-a297-0b17a687ab85 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.975557642Z" level=info msg="Removed container b601481291d18d293fde1e9bdd739aaa465044237a31cbe7191216fcbd8f394c: gcp-auth/gcp-auth-certs-create-2z9n5/create" id=36c33e08-c66a-49c2-a297-0b17a687ab85 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.978166209Z" level=info msg="Stopping pod sandbox: bf9a54265d95d68e8d23be75f41c27a76acd681c10de6ff5caee4149ced8a5a3" id=9bb93b32-be6f-4f16-a995-d7bc81d955d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.978225804Z" level=info msg="Stopped pod sandbox (already stopped): bf9a54265d95d68e8d23be75f41c27a76acd681c10de6ff5caee4149ced8a5a3" id=9bb93b32-be6f-4f16-a995-d7bc81d955d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.978671816Z" level=info msg="Removing pod sandbox: bf9a54265d95d68e8d23be75f41c27a76acd681c10de6ff5caee4149ced8a5a3" id=b89095c4-c221-41eb-8b04-f7de68e918f2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 09:15:39 addons-048116 crio[832]: time="2025-11-24T09:15:39.983911483Z" level=info msg="Removed pod sandbox: bf9a54265d95d68e8d23be75f41c27a76acd681c10de6ff5caee4149ced8a5a3" id=b89095c4-c221-41eb-8b04-f7de68e918f2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	666d4a654c782       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   4b0716202b2c4       busybox                                    default
	f6bc8bc475597       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 14 seconds ago       Running             gcp-auth                                 0                   e00c60b08a529       gcp-auth-78565c9fb4-h5h57                  gcp-auth
	35fb50b5b2713       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          24 seconds ago       Running             csi-snapshotter                          0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	2fd291f337e6c       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          25 seconds ago       Running             csi-provisioner                          0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	4802d7a3ceb22       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            27 seconds ago       Running             liveness-probe                           0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	9f97e26a753dc       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           28 seconds ago       Running             hostpath                                 0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	9d9632d112566       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                29 seconds ago       Running             node-driver-registrar                    0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	06d16a65ffa1d       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             30 seconds ago       Running             controller                               0                   97070190a552f       ingress-nginx-controller-6c8bf45fb-tzf4j   ingress-nginx
	faa3b26b74486       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            37 seconds ago       Running             gadget                                   0                   42b6d5697660b       gadget-8f498                               gadget
	6491e8608e918       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             38 seconds ago       Exited              patch                                    3                   55ea4e6b50293       gcp-auth-certs-patch-8hdjz                 gcp-auth
	233b0a07323f2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              41 seconds ago       Running             registry-proxy                           0                   21108f5875be0       registry-proxy-2xmpl                       kube-system
	bafca47aae23d       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             43 seconds ago       Exited              patch                                    2                   f9cb3eb66aed9       ingress-nginx-admission-patch-2rsq7        ingress-nginx
	cc1f77bc48cc1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      44 seconds ago       Running             volume-snapshot-controller               0                   283ac6a9ea00b       snapshot-controller-7d9fbc56b8-zn7bf       kube-system
	e4e10950f5aac       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              45 seconds ago       Running             csi-resizer                              0                   d80c8279def94       csi-hostpath-resizer-0                     kube-system
	1d60535273929       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      46 seconds ago       Running             volume-snapshot-controller               0                   9acb2949b9c71       snapshot-controller-7d9fbc56b8-rsz7j       kube-system
	f3e8c080e1d84       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   48 seconds ago       Running             csi-external-health-monitor-controller   0                   8328957bd3eac       csi-hostpathplugin-7cjv4                   kube-system
	2d27bdc7b18db       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               50 seconds ago       Running             cloud-spanner-emulator                   0                   ec8a20bbd6a22       cloud-spanner-emulator-5bdddb765-8jmm9     default
	e00cdeaf5f748       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           54 seconds ago       Running             registry                                 0                   f988ec2fb252c       registry-6b586f9694-d2pv7                  kube-system
	45be9a8bfc408       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   56 seconds ago       Exited              create                                   0                   28e8853a14afb       ingress-nginx-admission-create-r76dg       ingress-nginx
	12b1fee06478e       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             56 seconds ago       Running             csi-attacher                             0                   c0faf4f32668c       csi-hostpath-attacher-0                    kube-system
	87c73e079bb84       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        59 seconds ago       Running             metrics-server                           0                   dad3ed0b11a4f       metrics-server-85b7d694d7-4fg4f            kube-system
	9718a4629047a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   cd4d80ee8ffe2       kube-ingress-dns-minikube                  kube-system
	36318f85d4174       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   f69231bf61cb1       nvidia-device-plugin-daemonset-z6qjb       kube-system
	e1a7fe70441c7       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   e8d70ee6e2f33       yakd-dashboard-5ff678cb9-ltw2s             yakd-dashboard
	dc586c45d37fe       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   bf18c7acef752       local-path-provisioner-648f6765c9-c7876    local-path-storage
	2600acc92a3f2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   5a2013cd27c71       coredns-66bc5c9577-nbktx                   kube-system
	9c09d13919482       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   cb55648838854       storage-provisioner                        kube-system
	b4982ecbf9cf9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   cba656917424c       kindnet-qrx7h                              kube-system
	94b8a43bc5c3d       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   fce8406cb1cf2       kube-proxy-959tb                           kube-system
	540926b2e76ba       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   dffe4227a29c6       etcd-addons-048116                         kube-system
	49296fa79d5b5       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   14ed20b195f2a       kube-apiserver-addons-048116               kube-system
	239c1c8193a19       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   79382b436a25b       kube-scheduler-addons-048116               kube-system
	864930e920257       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   c28aefd67c75d       kube-controller-manager-addons-048116      kube-system
	
	
	==> coredns [2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c] <==
	[INFO] 10.244.0.18:50240 - 16544 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000063279s
	[INFO] 10.244.0.18:50240 - 42432 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002306681s
	[INFO] 10.244.0.18:50240 - 37930 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002189797s
	[INFO] 10.244.0.18:50240 - 23194 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00010186s
	[INFO] 10.244.0.18:50240 - 43217 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000071631s
	[INFO] 10.244.0.18:52377 - 12997 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150615s
	[INFO] 10.244.0.18:52377 - 12759 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000211006s
	[INFO] 10.244.0.18:39027 - 12835 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000118213s
	[INFO] 10.244.0.18:39027 - 12613 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000190395s
	[INFO] 10.244.0.18:54109 - 5303 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122267s
	[INFO] 10.244.0.18:54109 - 5114 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078713s
	[INFO] 10.244.0.18:38969 - 10071 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00143297s
	[INFO] 10.244.0.18:38969 - 9866 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001669634s
	[INFO] 10.244.0.18:40172 - 33299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141762s
	[INFO] 10.244.0.18:40172 - 33728 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127928s
	[INFO] 10.244.0.21:52327 - 7848 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000280266s
	[INFO] 10.244.0.21:47864 - 32568 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00018918s
	[INFO] 10.244.0.21:37460 - 59174 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139342s
	[INFO] 10.244.0.21:55836 - 52001 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118639s
	[INFO] 10.244.0.21:53604 - 941 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109499s
	[INFO] 10.244.0.21:35540 - 21907 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087566s
	[INFO] 10.244.0.21:39408 - 17577 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001951s
	[INFO] 10.244.0.21:46981 - 39459 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001711218s
	[INFO] 10.244.0.21:44654 - 46068 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000645612s
	[INFO] 10.244.0.21:51497 - 43640 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000867654s
	
	
	==> describe nodes <==
	Name:               addons-048116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-048116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=addons-048116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_13_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-048116
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-048116"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:13:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-048116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:15:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:15:43 +0000   Mon, 24 Nov 2025 09:13:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:15:43 +0000   Mon, 24 Nov 2025 09:13:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:15:43 +0000   Mon, 24 Nov 2025 09:13:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:15:43 +0000   Mon, 24 Nov 2025 09:14:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-048116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                ba29121a-9e25-4d48-89e2-ae8f0202b3f3
	  Boot ID:                    27a92f9c-55a4-4798-92be-317cdb891088
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-8jmm9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  gadget                      gadget-8f498                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gcp-auth                    gcp-auth-78565c9fb4-h5h57                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-tzf4j    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         117s
	  kube-system                 coredns-66bc5c9577-nbktx                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpathplugin-7cjv4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 etcd-addons-048116                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-qrx7h                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-addons-048116                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-addons-048116       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-959tb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-addons-048116                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 metrics-server-85b7d694d7-4fg4f             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         118s
	  kube-system                 nvidia-device-plugin-daemonset-z6qjb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 registry-6b586f9694-d2pv7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-creds-764b6fb674-9dvm5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 registry-proxy-2xmpl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 snapshot-controller-7d9fbc56b8-rsz7j        0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-7d9fbc56b8-zn7bf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  local-path-storage          local-path-provisioner-648f6765c9-c7876     0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ltw2s              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m2s                   kube-proxy       
	  Warning  CgroupV1                 2m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m16s (x8 over 2m17s)  kubelet          Node addons-048116 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m16s (x8 over 2m17s)  kubelet          Node addons-048116 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m16s (x8 over 2m17s)  kubelet          Node addons-048116 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s                   kubelet          Node addons-048116 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s                   kubelet          Node addons-048116 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s                   kubelet          Node addons-048116 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m4s                   node-controller  Node addons-048116 event: Registered Node addons-048116 in Controller
	  Normal   NodeReady                83s                    kubelet          Node addons-048116 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527] <==
	{"level":"warn","ts":"2025-11-24T09:13:35.938239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:35.951936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:35.971584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:35.986199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.003632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.020545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.038779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.056120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.081099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.093286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.117551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.126579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.142696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.159009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.174627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.208038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.224278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.238372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:36.304690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:52.092836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:13:52.107279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:14:14.017440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:14:14.039547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:14:14.061266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:14:14.077535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42706","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f6bc8bc4755979c1458661e14329600d6a8a859bf82d7f805c666a0337460b9f] <==
	2025/11/24 09:15:33 GCP Auth Webhook started!
	2025/11/24 09:15:36 Ready to marshal response ...
	2025/11/24 09:15:36 Ready to write response ...
	2025/11/24 09:15:36 Ready to marshal response ...
	2025/11/24 09:15:36 Ready to write response ...
	2025/11/24 09:15:36 Ready to marshal response ...
	2025/11/24 09:15:36 Ready to write response ...
	
	
	==> kernel <==
	 09:15:48 up  7:58,  0 user,  load average: 2.77, 2.91, 3.28
	Linux addons-048116 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8] <==
	E1124 09:14:15.718690       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:14:15.734339       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 09:14:15.734472       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:14:15.735555       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1124 09:14:17.333772       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:14:17.333881       1 metrics.go:72] Registering metrics
	I1124 09:14:17.334009       1 controller.go:711] "Syncing nftables rules"
	I1124 09:14:25.718089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:14:25.718125       1 main.go:301] handling current node
	I1124 09:14:35.718275       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:14:35.718306       1 main.go:301] handling current node
	I1124 09:14:45.718268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:14:45.718302       1 main.go:301] handling current node
	I1124 09:14:55.717885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:14:55.717983       1 main.go:301] handling current node
	I1124 09:15:05.717844       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:15:05.717881       1 main.go:301] handling current node
	I1124 09:15:15.718494       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:15:15.718528       1 main.go:301] handling current node
	I1124 09:15:25.718120       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:15:25.718159       1 main.go:301] handling current node
	I1124 09:15:35.718333       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:15:35.718385       1 main.go:301] handling current node
	I1124 09:15:45.717675       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:15:45.717704       1 main.go:301] handling current node
	
	
	==> kube-apiserver [49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d] <==
	W1124 09:14:14.075961       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 09:14:26.069575       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.187.255:443: connect: connection refused
	E1124 09:14:26.069676       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.187.255:443: connect: connection refused" logger="UnhandledError"
	W1124 09:14:26.070423       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.187.255:443: connect: connection refused
	E1124 09:14:26.070516       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.187.255:443: connect: connection refused" logger="UnhandledError"
	W1124 09:14:26.130514       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.187.255:443: connect: connection refused
	E1124 09:14:26.130557       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.187.255:443: connect: connection refused" logger="UnhandledError"
	W1124 09:14:50.921211       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 09:14:50.921257       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1124 09:14:50.921271       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1124 09:14:50.925165       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 09:14:50.925239       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1124 09:14:50.925257       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1124 09:14:51.483669       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.66.75:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.66.75:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.66.75:443: connect: connection refused" logger="UnhandledError"
	W1124 09:14:51.483762       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 09:14:51.483826       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 09:14:51.485493       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.66.75:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.66.75:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.66.75:443: connect: connection refused" logger="UnhandledError"
	I1124 09:14:51.614913       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 09:15:46.277264       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34230: use of closed network connection
	
	
	==> kube-controller-manager [864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84] <==
	I1124 09:13:44.041459       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:13:44.041586       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:13:44.041633       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 09:13:44.041660       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 09:13:44.041712       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 09:13:44.041746       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 09:13:44.041800       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:13:44.042210       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:13:44.042356       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:13:44.042453       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 09:13:44.046665       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:13:44.054314       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-048116" podCIDRs=["10.244.0.0/24"]
	I1124 09:13:44.055311       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 09:13:44.061266       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1124 09:13:50.016160       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1124 09:14:14.009813       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 09:14:14.009999       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 09:14:14.010045       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 09:14:14.030718       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 09:14:14.035319       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 09:14:14.111064       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:14:14.135765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:14:29.057325       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1124 09:14:44.116548       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 09:14:44.142848       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed] <==
	I1124 09:13:45.468133       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:13:45.647109       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:13:45.798538       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:13:45.818267       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 09:13:45.818372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:13:46.130471       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:13:46.130528       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:13:46.142107       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:13:46.142425       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:13:46.142439       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:13:46.148529       1 config.go:200] "Starting service config controller"
	I1124 09:13:46.148549       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:13:46.148565       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:13:46.148569       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:13:46.148606       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:13:46.148610       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:13:46.155955       1 config.go:309] "Starting node config controller"
	I1124 09:13:46.155992       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:13:46.156001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:13:46.248713       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:13:46.248769       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:13:46.248986       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c] <==
	E1124 09:13:37.107104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:13:37.107213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:13:37.107317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:13:37.107416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:13:37.109317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:13:37.109545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:13:37.109673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:13:37.109768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:13:37.109881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:13:37.109938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:13:37.921405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:13:37.964542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:13:37.974671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:13:38.021871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:13:38.054434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 09:13:38.090418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:13:38.118183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:13:38.148811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:13:38.172605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:13:38.249399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:13:38.345141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:13:38.359698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:13:38.389079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:13:38.403071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1124 09:13:41.074059       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:15:08 addons-048116 kubelet[1273]: I1124 09:15:08.600270    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2xmpl" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 09:15:09 addons-048116 kubelet[1273]: I1124 09:15:09.889312    1273 scope.go:117] "RemoveContainer" containerID="3b8776fe2da1cdc2f783d6cda2006974e006e952fa859b79c311d4fe463f15e5"
	Nov 24 09:15:10 addons-048116 kubelet[1273]: I1124 09:15:10.639884    1273 scope.go:117] "RemoveContainer" containerID="3b8776fe2da1cdc2f783d6cda2006974e006e952fa859b79c311d4fe463f15e5"
	Nov 24 09:15:11 addons-048116 kubelet[1273]: I1124 09:15:11.697099    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-8f498" podStartSLOduration=66.098415478 podStartE2EDuration="1m21.697077437s" podCreationTimestamp="2025-11-24 09:13:50 +0000 UTC" firstStartedPulling="2025-11-24 09:14:54.623594194 +0000 UTC m=+74.877893995" lastFinishedPulling="2025-11-24 09:15:10.222256153 +0000 UTC m=+90.476555954" observedRunningTime="2025-11-24 09:15:10.690532461 +0000 UTC m=+90.944832262" watchObservedRunningTime="2025-11-24 09:15:11.697077437 +0000 UTC m=+91.951377230"
	Nov 24 09:15:11 addons-048116 kubelet[1273]: I1124 09:15:11.965307    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hld2\" (UniqueName: \"kubernetes.io/projected/08d8a0b1-2819-4f46-a10a-98ff190c26bd-kube-api-access-6hld2\") pod \"08d8a0b1-2819-4f46-a10a-98ff190c26bd\" (UID: \"08d8a0b1-2819-4f46-a10a-98ff190c26bd\") "
	Nov 24 09:15:11 addons-048116 kubelet[1273]: I1124 09:15:11.967491    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d8a0b1-2819-4f46-a10a-98ff190c26bd-kube-api-access-6hld2" (OuterVolumeSpecName: "kube-api-access-6hld2") pod "08d8a0b1-2819-4f46-a10a-98ff190c26bd" (UID: "08d8a0b1-2819-4f46-a10a-98ff190c26bd"). InnerVolumeSpecName "kube-api-access-6hld2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 09:15:12 addons-048116 kubelet[1273]: I1124 09:15:12.065930    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hld2\" (UniqueName: \"kubernetes.io/projected/08d8a0b1-2819-4f46-a10a-98ff190c26bd-kube-api-access-6hld2\") on node \"addons-048116\" DevicePath \"\""
	Nov 24 09:15:12 addons-048116 kubelet[1273]: I1124 09:15:12.703839    1273 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ea4e6b502937c8dc8cf67d16d4f8c1a8a16f9014401106c03d06b3f099fba5"
	Nov 24 09:15:17 addons-048116 kubelet[1273]: I1124 09:15:17.743805    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-tzf4j" podStartSLOduration=67.384900287 podStartE2EDuration="1m26.743787748s" podCreationTimestamp="2025-11-24 09:13:51 +0000 UTC" firstStartedPulling="2025-11-24 09:14:58.214561634 +0000 UTC m=+78.468861427" lastFinishedPulling="2025-11-24 09:15:17.573449005 +0000 UTC m=+97.827748888" observedRunningTime="2025-11-24 09:15:17.742780063 +0000 UTC m=+97.997079856" watchObservedRunningTime="2025-11-24 09:15:17.743787748 +0000 UTC m=+97.998087590"
	Nov 24 09:15:21 addons-048116 kubelet[1273]: I1124 09:15:21.118448    1273 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 24 09:15:21 addons-048116 kubelet[1273]: I1124 09:15:21.118505    1273 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 24 09:15:24 addons-048116 kubelet[1273]: I1124 09:15:24.804486    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-7cjv4" podStartSLOduration=1.9185163790000002 podStartE2EDuration="58.804467383s" podCreationTimestamp="2025-11-24 09:14:26 +0000 UTC" firstStartedPulling="2025-11-24 09:14:27.007361043 +0000 UTC m=+47.261660836" lastFinishedPulling="2025-11-24 09:15:23.893312047 +0000 UTC m=+104.147611840" observedRunningTime="2025-11-24 09:15:24.801925228 +0000 UTC m=+105.056225070" watchObservedRunningTime="2025-11-24 09:15:24.804467383 +0000 UTC m=+105.058767176"
	Nov 24 09:15:30 addons-048116 kubelet[1273]: E1124 09:15:30.041271    1273 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 24 09:15:30 addons-048116 kubelet[1273]: E1124 09:15:30.042088    1273 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66991aa2-12ee-40af-aa3f-298f09e784f0-gcr-creds podName:66991aa2-12ee-40af-aa3f-298f09e784f0 nodeName:}" failed. No retries permitted until 2025-11-24 09:16:34.042056482 +0000 UTC m=+174.296356275 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/66991aa2-12ee-40af-aa3f-298f09e784f0-gcr-creds") pod "registry-creds-764b6fb674-9dvm5" (UID: "66991aa2-12ee-40af-aa3f-298f09e784f0") : secret "registry-creds-gcr" not found
	Nov 24 09:15:30 addons-048116 kubelet[1273]: W1124 09:15:30.266255    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/crio-e00c60b08a529a81a4de120bb3d3e6185abb935d45ff53fa6226f4facf2c14d6 WatchSource:0}: Error finding container e00c60b08a529a81a4de120bb3d3e6185abb935d45ff53fa6226f4facf2c14d6: Status 404 returned error can't find the container with id e00c60b08a529a81a4de120bb3d3e6185abb935d45ff53fa6226f4facf2c14d6
	Nov 24 09:15:33 addons-048116 kubelet[1273]: I1124 09:15:33.890826    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d19a3a-21c4-451d-afa4-152ffdbbb5ef" path="/var/lib/kubelet/pods/63d19a3a-21c4-451d-afa4-152ffdbbb5ef/volumes"
	Nov 24 09:15:36 addons-048116 kubelet[1273]: I1124 09:15:36.743402    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-h5h57" podStartSLOduration=100.715069658 podStartE2EDuration="1m43.743383926s" podCreationTimestamp="2025-11-24 09:13:53 +0000 UTC" firstStartedPulling="2025-11-24 09:15:30.271062037 +0000 UTC m=+110.525361829" lastFinishedPulling="2025-11-24 09:15:33.299376304 +0000 UTC m=+113.553676097" observedRunningTime="2025-11-24 09:15:33.84988881 +0000 UTC m=+114.104188603" watchObservedRunningTime="2025-11-24 09:15:36.743383926 +0000 UTC m=+116.997683718"
	Nov 24 09:15:36 addons-048116 kubelet[1273]: I1124 09:15:36.799583    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv28w\" (UniqueName: \"kubernetes.io/projected/46c2bdf3-43ee-4778-959c-9523d8d1f256-kube-api-access-sv28w\") pod \"busybox\" (UID: \"46c2bdf3-43ee-4778-959c-9523d8d1f256\") " pod="default/busybox"
	Nov 24 09:15:36 addons-048116 kubelet[1273]: I1124 09:15:36.799684    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/46c2bdf3-43ee-4778-959c-9523d8d1f256-gcp-creds\") pod \"busybox\" (UID: \"46c2bdf3-43ee-4778-959c-9523d8d1f256\") " pod="default/busybox"
	Nov 24 09:15:37 addons-048116 kubelet[1273]: W1124 09:15:37.080172    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/668a21c39000bf8acb37990e0466a5a952fee21f418a868adeedb9a121ab2ecf/crio-4b0716202b2c40b5bdee94cbb70a42e87b8148bd5a732359fa71155f4b9f6a51 WatchSource:0}: Error finding container 4b0716202b2c40b5bdee94cbb70a42e87b8148bd5a732359fa71155f4b9f6a51: Status 404 returned error can't find the container with id 4b0716202b2c40b5bdee94cbb70a42e87b8148bd5a732359fa71155f4b9f6a51
	Nov 24 09:15:39 addons-048116 kubelet[1273]: I1124 09:15:39.961696    1273 scope.go:117] "RemoveContainer" containerID="b601481291d18d293fde1e9bdd739aaa465044237a31cbe7191216fcbd8f394c"
	Nov 24 09:15:40 addons-048116 kubelet[1273]: E1124 09:15:40.066157    1273 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d55a478e63c4f8ba8751f3d8779ef594a4d4f60fa661477647d28b6e5b1150bf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d55a478e63c4f8ba8751f3d8779ef594a4d4f60fa661477647d28b6e5b1150bf/diff: no such file or directory, extraDiskErr: <nil>
	Nov 24 09:15:40 addons-048116 kubelet[1273]: E1124 09:15:40.079806    1273 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/491c6c667cb6e438283431e7fb3983dc3b9e308a8a1f168a4907a70087fc12ec/diff" to get inode usage: stat /var/lib/containers/storage/overlay/491c6c667cb6e438283431e7fb3983dc3b9e308a8a1f168a4907a70087fc12ec/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-8hdjz_08d8a0b1-2819-4f46-a10a-98ff190c26bd/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-8hdjz_08d8a0b1-2819-4f46-a10a-98ff190c26bd/patch/1.log: no such file or directory
	Nov 24 09:15:42 addons-048116 kubelet[1273]: I1124 09:15:42.038778    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=3.984746896 podStartE2EDuration="6.038757049s" podCreationTimestamp="2025-11-24 09:15:36 +0000 UTC" firstStartedPulling="2025-11-24 09:15:37.084115137 +0000 UTC m=+117.338414930" lastFinishedPulling="2025-11-24 09:15:39.138125291 +0000 UTC m=+119.392425083" observedRunningTime="2025-11-24 09:15:39.872653491 +0000 UTC m=+120.126953292" watchObservedRunningTime="2025-11-24 09:15:42.038757049 +0000 UTC m=+122.293056850"
	Nov 24 09:15:43 addons-048116 kubelet[1273]: I1124 09:15:43.891097    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d8a0b1-2819-4f46-a10a-98ff190c26bd" path="/var/lib/kubelet/pods/08d8a0b1-2819-4f46-a10a-98ff190c26bd/volumes"
	
	
	==> storage-provisioner [9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6] <==
	W1124 09:15:23.242840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:25.246736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:25.254177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:27.258078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:27.263384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:29.267271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:29.273079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:31.276003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:31.284861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:33.294399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:33.305528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:35.308466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:35.313938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:37.317877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:37.326126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:39.329385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:39.334252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:41.338204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:41.345536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:43.348690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:43.353667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:45.357016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:45.361911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:47.365233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:15:47.374691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-048116 -n addons-048116
helpers_test.go:269: (dbg) Run:  kubectl --context addons-048116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-r76dg ingress-nginx-admission-patch-2rsq7 registry-creds-764b6fb674-9dvm5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-048116 describe pod ingress-nginx-admission-create-r76dg ingress-nginx-admission-patch-2rsq7 registry-creds-764b6fb674-9dvm5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-048116 describe pod ingress-nginx-admission-create-r76dg ingress-nginx-admission-patch-2rsq7 registry-creds-764b6fb674-9dvm5: exit status 1 (88.630921ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r76dg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2rsq7" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-9dvm5" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-048116 describe pod ingress-nginx-admission-create-r76dg ingress-nginx-admission-patch-2rsq7 registry-creds-764b6fb674-9dvm5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable headlamp --alsologtostderr -v=1: exit status 11 (305.38188ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:15:49.653914 1814352 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:15:49.655705 1814352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:15:49.655720 1814352 out.go:374] Setting ErrFile to fd 2...
	I1124 09:15:49.655726 1814352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:15:49.656016 1814352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:15:49.656335 1814352 mustload.go:66] Loading cluster: addons-048116
	I1124 09:15:49.656748 1814352 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:15:49.656761 1814352 addons.go:622] checking whether the cluster is paused
	I1124 09:15:49.656867 1814352 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:15:49.656876 1814352 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:15:49.657502 1814352 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:15:49.677706 1814352 ssh_runner.go:195] Run: systemctl --version
	I1124 09:15:49.677764 1814352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:15:49.699212 1814352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:15:49.812299 1814352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:15:49.812413 1814352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:15:49.857401 1814352 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:15:49.857427 1814352 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:15:49.857432 1814352 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:15:49.857436 1814352 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:15:49.857445 1814352 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:15:49.857449 1814352 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:15:49.857452 1814352 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:15:49.857456 1814352 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:15:49.857459 1814352 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:15:49.857467 1814352 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:15:49.857474 1814352 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:15:49.857477 1814352 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:15:49.857490 1814352 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:15:49.857493 1814352 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:15:49.857497 1814352 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:15:49.857507 1814352 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:15:49.857514 1814352 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:15:49.857519 1814352 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:15:49.857522 1814352 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:15:49.857525 1814352 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:15:49.857530 1814352 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:15:49.857533 1814352 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:15:49.857536 1814352 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:15:49.857539 1814352 cri.go:89] found id: ""
	I1124 09:15:49.857590 1814352 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:15:49.874060 1814352 out.go:203] 
	W1124 09:15:49.876943 1814352 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:15:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:15:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:15:49.877042 1814352 out.go:285] * 
	* 
	W1124 09:15:49.887371 1814352 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:15:49.890165 1814352 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.21s)

TestAddons/parallel/CloudSpanner (6.33s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-8jmm9" [120bc60e-0ba5-4b0f-8417-7f43b9e8fa5b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003515227s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (322.382124ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:16:08.617727 1814805 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:16:08.618664 1814805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:08.618684 1814805 out.go:374] Setting ErrFile to fd 2...
	I1124 09:16:08.618692 1814805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:08.618983 1814805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:16:08.619492 1814805 mustload.go:66] Loading cluster: addons-048116
	I1124 09:16:08.619944 1814805 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:08.619968 1814805 addons.go:622] checking whether the cluster is paused
	I1124 09:16:08.620079 1814805 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:08.620095 1814805 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:16:08.620688 1814805 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:16:08.648384 1814805 ssh_runner.go:195] Run: systemctl --version
	I1124 09:16:08.648489 1814805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:16:08.673909 1814805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:16:08.784569 1814805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:16:08.784667 1814805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:16:08.830522 1814805 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:16:08.830546 1814805 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:16:08.830551 1814805 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:16:08.830555 1814805 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:16:08.830559 1814805 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:16:08.830563 1814805 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:16:08.830567 1814805 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:16:08.830570 1814805 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:16:08.830573 1814805 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:16:08.830580 1814805 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:16:08.830583 1814805 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:16:08.830587 1814805 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:16:08.830590 1814805 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:16:08.830594 1814805 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:16:08.830597 1814805 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:16:08.830603 1814805 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:16:08.830609 1814805 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:16:08.830614 1814805 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:16:08.830617 1814805 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:16:08.830620 1814805 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:16:08.830625 1814805 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:16:08.830628 1814805 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:16:08.830631 1814805 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:16:08.830634 1814805 cri.go:89] found id: ""
	I1124 09:16:08.830684 1814805 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:16:08.848759 1814805 out.go:203] 
	W1124 09:16:08.852904 1814805 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:16:08.852935 1814805 out.go:285] * 
	* 
	W1124 09:16:08.863823 1814805 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:16:08.868026 1814805 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.33s)

TestAddons/parallel/LocalPath (10.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-048116 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-048116 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-048116 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [baff05df-a037-4892-890b-4bc82b66d7db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [baff05df-a037-4892-890b-4bc82b66d7db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [baff05df-a037-4892-890b-4bc82b66d7db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00320635s
addons_test.go:967: (dbg) Run:  kubectl --context addons-048116 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 ssh "cat /opt/local-path-provisioner/pvc-353fa42e-eb73-4deb-b40a-e91859447994_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-048116 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-048116 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (292.774639ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:16:12.739145 1814984 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:16:12.740673 1814984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:12.740732 1814984 out.go:374] Setting ErrFile to fd 2...
	I1124 09:16:12.740764 1814984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:12.741132 1814984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:16:12.741615 1814984 mustload.go:66] Loading cluster: addons-048116
	I1124 09:16:12.742258 1814984 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:12.742322 1814984 addons.go:622] checking whether the cluster is paused
	I1124 09:16:12.742497 1814984 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:12.742532 1814984 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:16:12.743140 1814984 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:16:12.760683 1814984 ssh_runner.go:195] Run: systemctl --version
	I1124 09:16:12.760736 1814984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:16:12.785153 1814984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:16:12.900977 1814984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:16:12.901073 1814984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:16:12.930949 1814984 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:16:12.930974 1814984 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:16:12.930979 1814984 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:16:12.930983 1814984 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:16:12.930990 1814984 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:16:12.930994 1814984 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:16:12.930998 1814984 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:16:12.931001 1814984 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:16:12.931004 1814984 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:16:12.931011 1814984 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:16:12.931014 1814984 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:16:12.931017 1814984 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:16:12.931020 1814984 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:16:12.931023 1814984 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:16:12.931027 1814984 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:16:12.931032 1814984 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:16:12.931036 1814984 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:16:12.931040 1814984 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:16:12.931043 1814984 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:16:12.931046 1814984 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:16:12.931050 1814984 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:16:12.931054 1814984 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:16:12.931057 1814984 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:16:12.931064 1814984 cri.go:89] found id: ""
	I1124 09:16:12.931119 1814984 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:16:12.950717 1814984 out.go:203] 
	W1124 09:16:12.953894 1814984 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:16:12.953926 1814984 out.go:285] * 
	* 
	W1124 09:16:12.968149 1814984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:16:12.971459 1814984 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.47s)

TestAddons/parallel/NvidiaDevicePlugin (6.33s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-z6qjb" [f9f81e9f-df9a-4b25-b7c4-a591c3001fd3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003271091s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable nvidia-device-plugin --alsologtostderr -v=1
2025/11/24 09:16:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (326.55882ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:16:02.253280 1814591 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:16:02.254349 1814591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:02.254369 1814591 out.go:374] Setting ErrFile to fd 2...
	I1124 09:16:02.254385 1814591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:16:02.254734 1814591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:16:02.255090 1814591 mustload.go:66] Loading cluster: addons-048116
	I1124 09:16:02.258601 1814591 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:02.258680 1814591 addons.go:622] checking whether the cluster is paused
	I1124 09:16:02.258852 1814591 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:16:02.258870 1814591 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:16:02.259469 1814591 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:16:02.284300 1814591 ssh_runner.go:195] Run: systemctl --version
	I1124 09:16:02.284351 1814591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:16:02.309262 1814591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:16:02.423986 1814591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:16:02.424091 1814591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:16:02.465180 1814591 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:16:02.465211 1814591 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:16:02.465217 1814591 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:16:02.465221 1814591 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:16:02.465224 1814591 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:16:02.465228 1814591 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:16:02.465231 1814591 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:16:02.465234 1814591 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:16:02.465237 1814591 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:16:02.465243 1814591 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:16:02.465247 1814591 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:16:02.465250 1814591 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:16:02.465253 1814591 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:16:02.465256 1814591 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:16:02.465259 1814591 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:16:02.465264 1814591 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:16:02.465268 1814591 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:16:02.465271 1814591 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:16:02.465274 1814591 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:16:02.465278 1814591 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:16:02.465282 1814591 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:16:02.465285 1814591 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:16:02.465288 1814591 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:16:02.465292 1814591 cri.go:89] found id: ""
	I1124 09:16:02.465345 1814591 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:16:02.488745 1814591 out.go:203] 
	W1124 09:16:02.491891 1814591 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:16:02.491922 1814591 out.go:285] * 
	* 
	W1124 09:16:02.502800 1814591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:16:02.505717 1814591 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.33s)

TestAddons/parallel/Yakd (6.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ltw2s" [c7dc3a50-3500-4e54-89dc-70fbd1e20ed0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006286199s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-048116 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-048116 addons disable yakd --alsologtostderr -v=1: exit status 11 (276.905961ms)
                                                
-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 09:15:55.958794 1814427 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:15:55.959952 1814427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:15:55.960007 1814427 out.go:374] Setting ErrFile to fd 2...
	I1124 09:15:55.960030 1814427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:15:55.960348 1814427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:15:55.960715 1814427 mustload.go:66] Loading cluster: addons-048116
	I1124 09:15:55.961274 1814427 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:15:55.961320 1814427 addons.go:622] checking whether the cluster is paused
	I1124 09:15:55.961474 1814427 config.go:182] Loaded profile config "addons-048116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:15:55.961507 1814427 host.go:66] Checking if "addons-048116" exists ...
	I1124 09:15:55.962086 1814427 cli_runner.go:164] Run: docker container inspect addons-048116 --format={{.State.Status}}
	I1124 09:15:55.981475 1814427 ssh_runner.go:195] Run: systemctl --version
	I1124 09:15:55.981535 1814427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-048116
	I1124 09:15:55.999108 1814427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34990 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/addons-048116/id_rsa Username:docker}
	I1124 09:15:56.107765 1814427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:15:56.107852 1814427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:15:56.139401 1814427 cri.go:89] found id: "35fb50b5b27134e16ec221fdb99efa707adbe55994ed22933ed0b8c37821de56"
	I1124 09:15:56.139424 1814427 cri.go:89] found id: "2fd291f337e6ca4ca6cda71477780745e3f3f4dfee7d60a46e669a78ad057dd4"
	I1124 09:15:56.139430 1814427 cri.go:89] found id: "4802d7a3ceb22fc196734b8ab1f58e013cad15a6d7c9b51bc0b10a42267a0b7b"
	I1124 09:15:56.139440 1814427 cri.go:89] found id: "9f97e26a753dcd4aa507f7cc2631245257fdfddef49dbeb4c415dc60acef7ae6"
	I1124 09:15:56.139444 1814427 cri.go:89] found id: "9d9632d1125662f916418561195bccfcc3677aad2af7d4d3ee2cc377aa4070ee"
	I1124 09:15:56.139447 1814427 cri.go:89] found id: "233b0a07323f2535fa42e106c44f74a35ec681ba1a92061a57fc3043b109f63f"
	I1124 09:15:56.139451 1814427 cri.go:89] found id: "cc1f77bc48cc10d6ddcd562f8909044fd787421f9b17dc43bd30ccaaf8bdf806"
	I1124 09:15:56.139454 1814427 cri.go:89] found id: "e4e10950f5aac649bb5e7eb876842933b68fd35c4d8214c1cc1eda91dc0d5f42"
	I1124 09:15:56.139457 1814427 cri.go:89] found id: "1d605352739297f211d6e6a0c1d3a848dd279102de0eba17318f09449458c917"
	I1124 09:15:56.139463 1814427 cri.go:89] found id: "f3e8c080e1d84dca7d745340685f5e9fe19e21103ec9040ef197a8364c09ef2d"
	I1124 09:15:56.139466 1814427 cri.go:89] found id: "e00cdeaf5f748c2c6a6948c8e264101054a5665f40d6dcab608202ff7f6aeca8"
	I1124 09:15:56.139470 1814427 cri.go:89] found id: "12b1fee06478ef0d834bf4bc1402b2c1b1856ba81fe434b8cb0784d0fafe37f2"
	I1124 09:15:56.139474 1814427 cri.go:89] found id: "87c73e079bb8455e4388019dd002c2a89b1b64e09b7332e285056fd859724a72"
	I1124 09:15:56.139480 1814427 cri.go:89] found id: "9718a4629047ab3f24b0bb73f3f4211ecc76382ae1bf6aac29e7be81aaf19bc4"
	I1124 09:15:56.139483 1814427 cri.go:89] found id: "36318f85d4174a4768e4252068d3ef72baf4c59949917c3940fdb8ef2336ae46"
	I1124 09:15:56.139488 1814427 cri.go:89] found id: "2600acc92a3f21a347caaa0b3314010a36711dfac050dbd3d283a7911bcdd26c"
	I1124 09:15:56.139496 1814427 cri.go:89] found id: "9c09d13919482903b7ac1dee4e14f95c5e4631e7e698cbca65662f681e55dfc6"
	I1124 09:15:56.139501 1814427 cri.go:89] found id: "b4982ecbf9cf9f5cad166d299c767d4345f5508895f2b12f9782228921c87de8"
	I1124 09:15:56.139504 1814427 cri.go:89] found id: "94b8a43bc5c3de76e63f2d7b966d73449b50da73669bf12bd5194049ad817fed"
	I1124 09:15:56.139507 1814427 cri.go:89] found id: "540926b2e76ba840b50e019b4c4b2b1cc04a35c4f0f83a3749800809f101c527"
	I1124 09:15:56.139512 1814427 cri.go:89] found id: "49296fa79d5b5ceb006b1efe33ee6ca06f2711e4dba7da44a7e1644b32bcd55d"
	I1124 09:15:56.139515 1814427 cri.go:89] found id: "239c1c8193a19f35f35bc0642caf5462a9fa5115a6d494fbaffc0866bda3ec7c"
	I1124 09:15:56.139518 1814427 cri.go:89] found id: "864930e920257e4fa2793c13c2a84cede443a62f723aec740b3c85f4566c7d84"
	I1124 09:15:56.139521 1814427 cri.go:89] found id: ""
	I1124 09:15:56.139580 1814427 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 09:15:56.155694 1814427 out.go:203] 
	W1124 09:15:56.159161 1814427 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:15:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:15:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 09:15:56.159187 1814427 out.go:285] * 
	* 
	W1124 09:15:56.169474 1814427 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:15:56.172855 1814427 out.go:203] 
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-048116 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)

TestFunctional/parallel/DashboardCmd (302.57s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-498341 --alsologtostderr -v=1]
E1124 09:35:36.849504 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:36:59.921198 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-498341 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-498341 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-498341 --alsologtostderr -v=1] stderr:
I1124 09:33:21.940979 1833593 out.go:360] Setting OutFile to fd 1 ...
I1124 09:33:21.942511 1833593 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:33:21.942541 1833593 out.go:374] Setting ErrFile to fd 2...
I1124 09:33:21.942557 1833593 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:33:21.942844 1833593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 09:33:21.943135 1833593 mustload.go:66] Loading cluster: functional-498341
I1124 09:33:21.943580 1833593 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:33:21.944065 1833593 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
I1124 09:33:21.962779 1833593 host.go:66] Checking if "functional-498341" exists ...
I1124 09:33:21.963103 1833593 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1124 09:33:22.020582 1833593 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:33:22.010477583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1124 09:33:22.020708 1833593 api_server.go:166] Checking apiserver status ...
I1124 09:33:22.020784 1833593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1124 09:33:22.020831 1833593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
I1124 09:33:22.040299 1833593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
I1124 09:33:22.152926 1833593 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4105/cgroup
I1124 09:33:22.161336 1833593 api_server.go:182] apiserver freezer: "5:freezer:/docker/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/crio/crio-15e71e99b984ad56351b668dea7807b14fb8676c4c2532e7c2ef16079ae69280"
I1124 09:33:22.161410 1833593 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/crio/crio-15e71e99b984ad56351b668dea7807b14fb8676c4c2532e7c2ef16079ae69280/freezer.state
I1124 09:33:22.169226 1833593 api_server.go:204] freezer state: "THAWED"
I1124 09:33:22.169252 1833593 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1124 09:33:22.178726 1833593 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
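The probe sequence above (resolve the apiserver PID, confirm its freezer cgroup is THAWED, then GET `/healthz` and expect a 200 with body `ok`) can be sketched for its final step. This is a minimal illustrative sketch, not minikube's actual code: `check_healthz` and the stub HTTP server are invented here, and the stub stands in for the real apiserver at https://192.168.49.2:8441.

```python
import http.server
import threading
import urllib.request

def check_healthz(base_url: str) -> bool:
    """Healthy iff GET <base>/healthz answers 200 with body 'ok', as the log expects."""
    try:
        with urllib.request.urlopen(base_url + "/healthz", timeout=5) as resp:
            return resp.status == 200 and resp.read() == b"ok"
    except OSError:  # URLError/HTTPError are OSError subclasses
        return False

class _Healthz(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        ok = self.path == "/healthz"
        body = b"ok" if ok else b"not found"
        self.send_response(200 if ok else 404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the stub server quiet
        pass

# Stand-in for the real apiserver endpoint.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), _Healthz)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"
healthy = check_healthz(base)
print(healthy)
server.shutdown()
```

In the real flow the request goes over TLS with the profile's client certificate; the plain-HTTP stub here only illustrates the 200/`ok` contract.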
W1124 09:33:22.178781 1833593 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1124 09:33:22.178977 1833593 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:33:22.178996 1833593 addons.go:70] Setting dashboard=true in profile "functional-498341"
I1124 09:33:22.179009 1833593 addons.go:239] Setting addon dashboard=true in "functional-498341"
I1124 09:33:22.179033 1833593 host.go:66] Checking if "functional-498341" exists ...
I1124 09:33:22.179435 1833593 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
I1124 09:33:22.199679 1833593 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1124 09:33:22.202539 1833593 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1124 09:33:22.205392 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1124 09:33:22.205422 1833593 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1124 09:33:22.205523 1833593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
I1124 09:33:22.222762 1833593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
I1124 09:33:22.334776 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1124 09:33:22.334800 1833593 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1124 09:33:22.348441 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1124 09:33:22.348482 1833593 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1124 09:33:22.361647 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1124 09:33:22.361671 1833593 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1124 09:33:22.375556 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1124 09:33:22.375584 1833593 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1124 09:33:22.389386 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1124 09:33:22.389408 1833593 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1124 09:33:22.403667 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1124 09:33:22.403690 1833593 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1124 09:33:22.417541 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1124 09:33:22.417600 1833593 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1124 09:33:22.431984 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1124 09:33:22.432009 1833593 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1124 09:33:22.445406 1833593 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1124 09:33:22.445430 1833593 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1124 09:33:22.459755 1833593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
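After staging each manifest individually over SSH, the log applies them all in one `kubectl apply` with repeated `-f` flags. A sketch of assembling that invocation from the manifest list (the `apply_command` helper is hypothetical; only the paths and flags come from the log line above):

```python
import shlex

ADDONS_DIR = "/etc/kubernetes/addons"
MANIFESTS = [
    "dashboard-ns.yaml", "dashboard-clusterrole.yaml",
    "dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
    "dashboard-dp.yaml", "dashboard-role.yaml",
    "dashboard-rolebinding.yaml", "dashboard-sa.yaml",
    "dashboard-secret.yaml", "dashboard-svc.yaml",
]

def apply_command(kubectl: str, kubeconfig: str, files) -> str:
    """Build a single `kubectl apply` over every staged manifest."""
    cmd = ["sudo", f"KUBECONFIG={kubeconfig}", kubectl, "apply"]
    for f in files:
        cmd += ["-f", f"{ADDONS_DIR}/{f}"]
    return shlex.join(cmd)

cmd = apply_command("/var/lib/minikube/binaries/v1.34.2/kubectl",
                    "/var/lib/minikube/kubeconfig", MANIFESTS)
print(cmd)
```

Applying everything in one invocation lets the apiserver see the namespace before the namespaced objects that depend on it.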
I1124 09:33:23.253794 1833593 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-498341 addons enable metrics-server

I1124 09:33:23.256602 1833593 addons.go:202] Writing out "functional-498341" config to set dashboard=true...
W1124 09:33:23.256902 1833593 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1124 09:33:23.257626 1833593 kapi.go:59] client config for functional-498341: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1124 09:33:23.258229 1833593 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1124 09:33:23.258265 1833593 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1124 09:33:23.258272 1833593 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1124 09:33:23.258276 1833593 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1124 09:33:23.258285 1833593 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1124 09:33:23.273984 1833593 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  56395aa5-298b-44ac-9550-cd6055f64eef 1549 0 2025-11-24 09:33:23 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-24 09:33:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.60.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.109.60.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
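The proxy-health URLs polled below follow the apiserver's service-proxy path scheme: the segment between the slashes is `<scheme>:<service>:<port>`, and minikube leaves the port empty so the Service's single port is used. A sketch of building that path (the `service_proxy_url` helper is invented here for illustration):

```python
def service_proxy_url(host: str, namespace: str, service: str,
                      scheme: str = "http", port: str = "") -> str:
    """Build an apiserver service-proxy URL like the one polled in the log."""
    return (f"http://{host}/api/v1/namespaces/{namespace}"
            f"/services/{scheme}:{service}:{port}/proxy/")

url = service_proxy_url("127.0.0.1:36195",
                        "kubernetes-dashboard", "kubernetes-dashboard")
print(url)
```

Requests to this URL go to the local `kubectl proxy`, which forwards them through the apiserver to the dashboard Service; a 503 means the Service has no ready endpoints yet.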
W1124 09:33:23.274163 1833593 out.go:285] * Launching proxy ...
* Launching proxy ...
I1124 09:33:23.274287 1833593 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-498341 proxy --port 36195]
I1124 09:33:23.276163 1833593 dashboard.go:159] Waiting for kubectl to output host:port ...
I1124 09:33:23.334667 1833593 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1124 09:33:23.334740 1833593 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1124 09:33:23.359197 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[64921d64-94d7-4bdc-b498-de4eab4bf7c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40008680c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000160a00 TLS:<nil>}
I1124 09:33:23.359293 1833593 retry.go:31] will retry after 72.198µs: Temporary Error: unexpected response code: 503
I1124 09:33:23.364035 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf642b8a-79c7-4d9f-95c7-7c028a77b0d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40007cce00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000001b80 TLS:<nil>}
I1124 09:33:23.364115 1833593 retry.go:31] will retry after 148.05µs: Temporary Error: unexpected response code: 503
I1124 09:33:23.368124 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f9bd6a6a-8460-4042-9541-5eddea2a60c9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40007ccf00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000001cc0 TLS:<nil>}
I1124 09:33:23.368186 1833593 retry.go:31] will retry after 220.562µs: Temporary Error: unexpected response code: 503
I1124 09:33:23.372182 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a5571bfd-7edf-4dd7-bdfd-9417d4ea7a87] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40007cd0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000001e00 TLS:<nil>}
I1124 09:33:23.372242 1833593 retry.go:31] will retry after 238.688µs: Temporary Error: unexpected response code: 503
I1124 09:33:23.375932 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8eec5236-3780-473a-985b-5da44f2ee732] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40007cd880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4000 TLS:<nil>}
I1124 09:33:23.375989 1833593 retry.go:31] will retry after 735.669µs: Temporary Error: unexpected response code: 503
I1124 09:33:23.379698 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17f48011-772e-4072-bc80-a1e3ec16bf86] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40008684c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4140 TLS:<nil>}
I1124 09:33:23.379765 1833593 retry.go:31] will retry after 1.097182ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.384717 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8eae4a11-ebc6-4bd0-b183-801916d53ed6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x4000868540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000160b40 TLS:<nil>}
I1124 09:33:23.384772 1833593 retry.go:31] will retry after 1.40609ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.389970 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0024e965-1f98-490e-9554-65727698985d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40008685c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000160c80 TLS:<nil>}
I1124 09:33:23.390027 1833593 retry.go:31] will retry after 2.167568ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.396291 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6e08944-40cf-4274-ad82-cebb08562964] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x4000868640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001612c0 TLS:<nil>}
I1124 09:33:23.396348 1833593 retry.go:31] will retry after 3.075262ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.402580 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[804e884e-fa01-413b-836c-cc90cb195a83] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40008686c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000161680 TLS:<nil>}
I1124 09:33:23.402637 1833593 retry.go:31] will retry after 4.567191ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.410812 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3849e0af-6e87-461e-a230-9639e02af831] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x40007cddc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001617c0 TLS:<nil>}
I1124 09:33:23.410873 1833593 retry.go:31] will retry after 7.652783ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.422076 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[52e1a04e-5431-4250-8f27-352879fc53c0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x4000868780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000161a40 TLS:<nil>}
I1124 09:33:23.422183 1833593 retry.go:31] will retry after 10.250714ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.436240 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ee633c95-491f-4826-aca4-17a18f9d7cad] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x4000868880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4280 TLS:<nil>}
I1124 09:33:23.436303 1833593 retry.go:31] will retry after 11.646581ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.456305 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[81f3da9c-0225-4560-ade1-e71d70d96aff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x400168c040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000224280 TLS:<nil>}
I1124 09:33:23.456366 1833593 retry.go:31] will retry after 20.508305ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.480856 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[978cf21a-e3b2-470c-a43a-4842833fe11f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x400168c0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d43c0 TLS:<nil>}
I1124 09:33:23.480918 1833593 retry.go:31] will retry after 35.166869ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.520214 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[788b82d5-0ca6-4adb-9656-5d8eeb645abf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x400168c140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4500 TLS:<nil>}
I1124 09:33:23.520280 1833593 retry.go:31] will retry after 44.5699ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.568724 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[11eb57bf-0bdf-4063-abd8-288f9118441c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x400168c200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002243c0 TLS:<nil>}
I1124 09:33:23.568811 1833593 retry.go:31] will retry after 92.621952ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.664876 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d24b43a8-c86d-462a-bb1f-710181320d5b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x400168c280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4640 TLS:<nil>}
I1124 09:33:23.664937 1833593 retry.go:31] will retry after 99.33616ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.768185 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf856687-cc33-452a-be1f-5187ed6af2ea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x400168c300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4780 TLS:<nil>}
I1124 09:33:23.768268 1833593 retry.go:31] will retry after 183.891735ms: Temporary Error: unexpected response code: 503
I1124 09:33:23.955583 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04efe852-0576-46dd-b0d7-5cbda0896248] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:23 GMT]] Body:0x4000868d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000224500 TLS:<nil>}
I1124 09:33:23.955643 1833593 retry.go:31] will retry after 247.361954ms: Temporary Error: unexpected response code: 503
I1124 09:33:24.207112 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[df8a770b-ea45-4b1c-bb0c-f830239a1486] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:24 GMT]] Body:0x400168c400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d48c0 TLS:<nil>}
I1124 09:33:24.207185 1833593 retry.go:31] will retry after 444.952699ms: Temporary Error: unexpected response code: 503
I1124 09:33:24.656046 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28a26816-d3ba-45e6-8415-5ab44a918a6d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:24 GMT]] Body:0x4000868f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000224640 TLS:<nil>}
I1124 09:33:24.656109 1833593 retry.go:31] will retry after 382.151427ms: Temporary Error: unexpected response code: 503
I1124 09:33:25.041844 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[37007bd1-76e6-4212-b5fb-3356b8d5902b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:25 GMT]] Body:0x400168c500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4a00 TLS:<nil>}
I1124 09:33:25.041922 1833593 retry.go:31] will retry after 809.150108ms: Temporary Error: unexpected response code: 503
I1124 09:33:25.855514 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe3158dd-965d-4cfa-a6fb-a48b2c83b541] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:25 GMT]] Body:0x4000869080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000224780 TLS:<nil>}
I1124 09:33:25.855589 1833593 retry.go:31] will retry after 657.176929ms: Temporary Error: unexpected response code: 503
I1124 09:33:26.515968 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5e0d132b-ee33-4651-8c9f-870ef1ef5e92] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:26 GMT]] Body:0x4000869180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000224a00 TLS:<nil>}
I1124 09:33:26.516033 1833593 retry.go:31] will retry after 2.082570378s: Temporary Error: unexpected response code: 503
I1124 09:33:28.602794 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cbc0a680-e745-44af-814b-f0d897b86966] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:28 GMT]] Body:0x4000869200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000224b40 TLS:<nil>}
I1124 09:33:28.602858 1833593 retry.go:31] will retry after 3.674839272s: Temporary Error: unexpected response code: 503
I1124 09:33:32.281684 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cde55d5e-a34a-418a-83f4-b041a933cf3e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:32 GMT]] Body:0x4000869380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000224c80 TLS:<nil>}
I1124 09:33:32.281763 1833593 retry.go:31] will retry after 2.428634013s: Temporary Error: unexpected response code: 503
I1124 09:33:34.714294 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f356c251-e53a-4551-a0d3-7682f1c02183] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:34 GMT]] Body:0x4000869480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000224f00 TLS:<nil>}
I1124 09:33:34.714360 1833593 retry.go:31] will retry after 3.653564295s: Temporary Error: unexpected response code: 503
I1124 09:33:38.371392 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[503299ab-f251-49d5-82fa-af464110b9a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:38 GMT]] Body:0x40008695c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4b40 TLS:<nil>}
I1124 09:33:38.371454 1833593 retry.go:31] will retry after 5.629878586s: Temporary Error: unexpected response code: 503
I1124 09:33:44.009681 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0009b4e2-25aa-4330-8197-8e7c84f72ad8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:33:44 GMT]] Body:0x400168c880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000225040 TLS:<nil>}
I1124 09:33:44.009744 1833593 retry.go:31] will retry after 18.323175381s: Temporary Error: unexpected response code: 503
I1124 09:34:02.336600 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad7f9cf9-68af-4eee-a437-3a6cf719cfc0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:34:02 GMT]] Body:0x4000869700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4c80 TLS:<nil>}
I1124 09:34:02.336661 1833593 retry.go:31] will retry after 10.969367748s: Temporary Error: unexpected response code: 503
I1124 09:34:13.309876 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8634c1ea-3223-4bee-8213-b4d663cb299c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:34:13 GMT]] Body:0x400168c980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000225180 TLS:<nil>}
I1124 09:34:13.309937 1833593 retry.go:31] will retry after 16.993529981s: Temporary Error: unexpected response code: 503
I1124 09:34:30.309183 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b8804ab6-2696-40e0-b5e3-6035aba6f050] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:34:30 GMT]] Body:0x400168ca40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4dc0 TLS:<nil>}
I1124 09:34:30.309252 1833593 retry.go:31] will retry after 34.504550644s: Temporary Error: unexpected response code: 503
I1124 09:35:04.817277 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f21c7ed7-c442-4f48-906c-d37241560e5d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:35:04 GMT]] Body:0x400168cb00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d4f00 TLS:<nil>}
I1124 09:35:04.817357 1833593 retry.go:31] will retry after 37.22466653s: Temporary Error: unexpected response code: 503
I1124 09:35:42.045610 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[691157ef-7a92-4006-9e8c-0e55f2e49738] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:35:42 GMT]] Body:0x40008680c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001d5040 TLS:<nil>}
I1124 09:35:42.045684 1833593 retry.go:31] will retry after 44.390393195s: Temporary Error: unexpected response code: 503
I1124 09:36:26.440098 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6bd5bc02-07d5-44fd-8766-74719876e4c0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:36:26 GMT]] Body:0x400168c0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000225400 TLS:<nil>}
I1124 09:36:26.440168 1833593 retry.go:31] will retry after 1m27.102508644s: Temporary Error: unexpected response code: 503
I1124 09:37:53.546904 1833593 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1faa31ff-895d-4092-90d9-151a595b3f81] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 09:37:53 GMT]] Body:0x400168c0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000225540 TLS:<nil>}
I1124 09:37:53.546973 1833593 retry.go:31] will retry after 1m18.290592109s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-498341
helpers_test.go:252: (dbg) docker inspect functional-498341:

-- stdout --
	[
	    {
	        "Id": "6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f",
	        "Created": "2025-11-24T09:19:55.998787995Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1822668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:19:56.073167874Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/hosts",
	        "LogPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f-json.log",
	        "Name": "/functional-498341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-498341:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-498341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f",
	                "LowerDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/merged",
	                "UpperDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/diff",
	                "WorkDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-498341",
	                "Source": "/var/lib/docker/volumes/functional-498341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-498341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-498341",
	                "name.minikube.sigs.k8s.io": "functional-498341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "240a4dd1813daa01947671dc124987da45cf1671d25b32d2f3891003a67f2fe7",
	            "SandboxKey": "/var/run/docker/netns/240a4dd1813d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35000"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35001"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35004"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35002"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35003"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-498341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:13:66:1a:33:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c1abece26d5d0dbba2a759db10fd2d39adcd67e3473b07c793acf2b30828945",
	                    "EndpointID": "08ec2567ced7c59a5794dec53e18552f77fb769f748c21d66ed8b80f607753ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-498341",
	                        "6c463f059d60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-498341 -n functional-498341
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 logs -n 25: (1.489810596s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image load --daemon kicbase/echo-server:functional-498341 --alsologtostderr                                                             │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image save kicbase/echo-server:functional-498341 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image rm kicbase/echo-server:functional-498341 --alsologtostderr                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image save --daemon kicbase/echo-server:functional-498341 --alsologtostderr                                                             │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/1806704.pem                                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /usr/share/ca-certificates/1806704.pem                                                                                     │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/18067042.pem                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /usr/share/ca-certificates/18067042.pem                                                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/test/nested/copy/1806704/hosts                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format short --alsologtostderr                                                                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format yaml --alsologtostderr                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh pgrep buildkitd                                                                                                                     │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │                     │
	│ image          │ functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format json --alsologtostderr                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format table --alsologtostderr                                                                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:33:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:33:21.672378 1833519 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:33:21.672540 1833519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:33:21.672552 1833519 out.go:374] Setting ErrFile to fd 2...
	I1124 09:33:21.672557 1833519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:33:21.672847 1833519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:33:21.673254 1833519 out.go:368] Setting JSON to false
	I1124 09:33:21.674147 1833519 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29752,"bootTime":1763947050,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:33:21.674221 1833519 start.go:143] virtualization:  
	I1124 09:33:21.677354 1833519 out.go:179] * [functional-498341] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:33:21.681086 1833519 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:33:21.681276 1833519 notify.go:221] Checking for updates...
	I1124 09:33:21.686869 1833519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:33:21.689680 1833519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:33:21.692557 1833519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:33:21.695898 1833519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:33:21.698749 1833519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:33:21.702131 1833519 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:33:21.702804 1833519 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:33:21.730951 1833519 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:33:21.731058 1833519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:33:21.790415 1833519 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:33:21.781169724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:33:21.790517 1833519 docker.go:319] overlay module found
	I1124 09:33:21.793678 1833519 out.go:179] * Using the docker driver based on existing profile
	I1124 09:33:21.796512 1833519 start.go:309] selected driver: docker
	I1124 09:33:21.796529 1833519 start.go:927] validating driver "docker" against &{Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:33:21.796648 1833519 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:33:21.796761 1833519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:33:21.875903 1833519 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:33:21.865552032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:33:21.876319 1833519 cni.go:84] Creating CNI manager for ""
	I1124 09:33:21.876386 1833519 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:33:21.876434 1833519 start.go:353] cluster config:
	{Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:33:21.881262 1833519 out.go:179] * dry-run validation complete!
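	The driver validation above works by running `docker system info --format "{{json .}}"` and inspecting the parsed JSON (the `info.go:266` line). minikube does this in Go; the following Python sketch is illustrative only, with a hypothetical `check_docker_info` helper and thresholds, using field values copied from the `docker info` line in this log:

	```python
	import json

	def check_docker_info(info, min_cpus=2, min_mem_bytes=2 * 1024**3):
	    """Return a list of problems found in a parsed `docker system info` payload."""
	    problems = []
	    if info.get("NCPU", 0) < min_cpus:
	        problems.append(f"need >= {min_cpus} CPUs, have {info.get('NCPU')}")
	    if info.get("MemTotal", 0) < min_mem_bytes:
	        problems.append("insufficient memory for the requested cluster size")
	    if not info.get("MemoryLimit", False):
	        problems.append("kernel memory-limit support is disabled")
	    return problems

	# NCPU/MemTotal/MemoryLimit as reported by this host's `docker info` output:
	sample = json.loads('{"NCPU": 2, "MemTotal": 8214831104, "MemoryLimit": true}')
	print(check_docker_info(sample))  # -> []
	```

	An empty problem list corresponds to the `status for docker: {Installed:true Healthy:true ...}` line above.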
	
	
	==> CRI-O <==
	Nov 24 09:37:28 functional-498341 crio[3532]: time="2025-11-24T09:37:28.009576313Z" level=info msg="Image localhost/kicbase/echo-server:functional-498341 not found" id=bcae0c96-ee87-47d3-a605-ac97336eb40c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:28 functional-498341 crio[3532]: time="2025-11-24T09:37:28.009617774Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-498341 found" id=bcae0c96-ee87-47d3-a605-ac97336eb40c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:29 functional-498341 crio[3532]: time="2025-11-24T09:37:29.495013505Z" level=info msg="Checking image status: kicbase/echo-server:functional-498341" id=ec4d51cd-7fd5-4301-a662-1c940d956e7b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:29 functional-498341 crio[3532]: time="2025-11-24T09:37:29.520774288Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-498341" id=b33cb301-8fab-4141-a11e-3a4c605cdedd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:29 functional-498341 crio[3532]: time="2025-11-24T09:37:29.520970434Z" level=info msg="Image docker.io/kicbase/echo-server:functional-498341 not found" id=b33cb301-8fab-4141-a11e-3a4c605cdedd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:29 functional-498341 crio[3532]: time="2025-11-24T09:37:29.52102696Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-498341 found" id=b33cb301-8fab-4141-a11e-3a4c605cdedd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:29 functional-498341 crio[3532]: time="2025-11-24T09:37:29.552113455Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-498341" id=0723eaf1-3fe3-41f1-a005-6d448f3c9632 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:29 functional-498341 crio[3532]: time="2025-11-24T09:37:29.552274975Z" level=info msg="Image localhost/kicbase/echo-server:functional-498341 not found" id=0723eaf1-3fe3-41f1-a005-6d448f3c9632 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:29 functional-498341 crio[3532]: time="2025-11-24T09:37:29.552345064Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-498341 found" id=0723eaf1-3fe3-41f1-a005-6d448f3c9632 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:30 functional-498341 crio[3532]: time="2025-11-24T09:37:30.377938343Z" level=info msg="Checking image status: kicbase/echo-server:functional-498341" id=d3cbea7e-e8a0-4793-b163-f83b090a1e66 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:30 functional-498341 crio[3532]: time="2025-11-24T09:37:30.403246443Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-498341" id=08373926-a088-4eb9-8442-a504ef802271 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:30 functional-498341 crio[3532]: time="2025-11-24T09:37:30.403405461Z" level=info msg="Image docker.io/kicbase/echo-server:functional-498341 not found" id=08373926-a088-4eb9-8442-a504ef802271 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:30 functional-498341 crio[3532]: time="2025-11-24T09:37:30.403460034Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-498341 found" id=08373926-a088-4eb9-8442-a504ef802271 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:30 functional-498341 crio[3532]: time="2025-11-24T09:37:30.428140056Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-498341" id=72e42834-e2c0-4517-85c0-f43b1c87f680 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:30 functional-498341 crio[3532]: time="2025-11-24T09:37:30.428297876Z" level=info msg="Image localhost/kicbase/echo-server:functional-498341 not found" id=72e42834-e2c0-4517-85c0-f43b1c87f680 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:30 functional-498341 crio[3532]: time="2025-11-24T09:37:30.428350143Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-498341 found" id=72e42834-e2c0-4517-85c0-f43b1c87f680 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:39 functional-498341 crio[3532]: time="2025-11-24T09:37:39.976595114Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=40c32f5b-b392-4c42-98c5-5d3ce9e21d17 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:37:39 functional-498341 crio[3532]: time="2025-11-24T09:37:39.979062747Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Nov 24 09:37:51 functional-498341 crio[3532]: time="2025-11-24T09:37:51.110554822Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=00606050-a974-4913-80ee-6b07e90ddd4b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:51 functional-498341 crio[3532]: time="2025-11-24T09:37:51.110803646Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=00606050-a974-4913-80ee-6b07e90ddd4b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:37:51 functional-498341 crio[3532]: time="2025-11-24T09:37:51.110871199Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=00606050-a974-4913-80ee-6b07e90ddd4b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:06 functional-498341 crio[3532]: time="2025-11-24T09:38:06.111366403Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=2ed215c2-6eeb-4816-8706-74506bece73e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:06 functional-498341 crio[3532]: time="2025-11-24T09:38:06.111552465Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=2ed215c2-6eeb-4816-8706-74506bece73e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:06 functional-498341 crio[3532]: time="2025-11-24T09:38:06.111600359Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=2ed215c2-6eeb-4816-8706-74506bece73e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:10 functional-498341 crio[3532]: time="2025-11-24T09:38:10.287547094Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
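	The repeated `Checking image status` triples above show the lookup order the image-load test walks for an unqualified reference: the bare tag first, then `docker.io/`, then `localhost/`. A minimal sketch of that expansion, inferred from these log lines rather than taken from CRI-O's actual short-name code:

	```python
	def candidate_refs(image):
	    """Expand an unqualified image reference into the lookup order
	    observed in the CRI-O log above: bare name, docker.io, localhost."""
	    name = image.split(":")[0]
	    first = name.split("/")[0]
	    if "/" in name and ("." in first or first == "localhost"):
	        return [image]  # already qualified with a registry host
	    return [image, f"docker.io/{image}", f"localhost/{image}"]

	print(candidate_refs("kicbase/echo-server:functional-498341"))
	```

	All three candidates come back "not found" in the log, which is why the `ImageCommands` tests in the failure table report the echo-server image as missing.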
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	48d016de0ba2b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   5 minutes ago       Exited              mount-munger              0                   71422e0096c46       busybox-mount                               default
	c637ec76ab83c       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90       15 minutes ago      Running             nginx                     0                   63cf0c579e198       nginx-svc                                   default
	1fb0c9b9a85a0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 minutes ago      Running             storage-provisioner       4                   feaeb05d97102       storage-provisioner                         kube-system
	71b403f584411       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      15 minutes ago      Running             kube-proxy                3                   191e021fbd5e9       kube-proxy-4n9vx                            kube-system
	a891e2cd44b94       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      15 minutes ago      Running             kindnet-cni               3                   4a18924a512d0       kindnet-dxrpc                               kube-system
	15e71e99b984a       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      15 minutes ago      Running             kube-apiserver            0                   d52339a0af995       kube-apiserver-functional-498341            kube-system
	baa104dad4c40       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      15 minutes ago      Running             kube-scheduler            3                   b10cd6e774a9f       kube-scheduler-functional-498341            kube-system
	c108a7442e642       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      15 minutes ago      Running             kube-controller-manager   3                   74279f5069235       kube-controller-manager-functional-498341   kube-system
	9aa825831f876       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      15 minutes ago      Running             etcd                      3                   56ec423187fee       etcd-functional-498341                      kube-system
	559498b49775e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      16 minutes ago      Exited              storage-provisioner       3                   feaeb05d97102       storage-provisioner                         kube-system
	4f74f3fa64dec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      16 minutes ago      Running             coredns                   2                   edc04b4c96ff0       coredns-66bc5c9577-vfd2t                    kube-system
	04dc6d3814bef       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      16 minutes ago      Exited              kube-controller-manager   2                   74279f5069235       kube-controller-manager-functional-498341   kube-system
	49717583e9f2f       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      16 minutes ago      Exited              etcd                      2                   56ec423187fee       etcd-functional-498341                      kube-system
	93d44c5402102       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      16 minutes ago      Exited              kube-scheduler            2                   b10cd6e774a9f       kube-scheduler-functional-498341            kube-system
	3e05c09486ff2       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      16 minutes ago      Exited              kube-proxy                2                   191e021fbd5e9       kube-proxy-4n9vx                            kube-system
	d917fc755b360       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      16 minutes ago      Exited              kindnet-cni               2                   4a18924a512d0       kindnet-dxrpc                               kube-system
	145da221b5b69       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      17 minutes ago      Exited              coredns                   1                   edc04b4c96ff0       coredns-66bc5c9577-vfd2t                    kube-system
	
	
	==> coredns [145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41033 - 41467 "HINFO IN 7622388333576306592.7037694830251736267. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031822616s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4f74f3fa64dec4ab5760c54d4c13bd86a207e5012bffa99ac8d9fa91691713d5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36179 - 13838 "HINFO IN 7540107475593800547.8826876563962152014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.057152215s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41354->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41364->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41374->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-498341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-498341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=functional-498341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_20_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:20:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-498341
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:38:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:37:59 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:37:59 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:37:59 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:37:59 +0000   Mon, 24 Nov 2025 09:21:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-498341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                b19cc9fb-383b-4269-9c57-72146af388e0
	  Boot ID:                    27a92f9c-55a4-4798-92be-317cdb891088
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-t27wr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-ktl8q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-vfd2t                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-functional-498341                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-dxrpc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-functional-498341              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-498341     200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-4n9vx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-functional-498341              100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-scnr9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7cvx5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
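	The percentages in the Allocated resources table are total requests over the node's allocatable capacity (2 CPU = 2000m, memory 8022296Ki), with the fraction truncated toward zero. A quick check reproducing the two non-zero rows:

	```python
	def pct(requested, allocatable):
	    # kubectl describe truncates the percentage, so 850m of 2000m shows as 42%
	    return requested * 100 // allocatable

	assert pct(850, 2000) == 42           # cpu: 850m requested of 2 cores
	assert pct(220 * 1024, 8022296) == 2  # memory: 220Mi requested of 8022296Ki
	print(pct(850, 2000), pct(220 * 1024, 8022296))  # -> 42 2
	```

	The same truncation explains why a 70Mi CoreDNS request rounds down to `0 (0%)` in the pod table above.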
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     18m                kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           18m                node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	  Normal   NodeReady                17m                kubelet          Node functional-498341 status is now: NodeReady
	  Normal   RegisteredNode           16m                node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)  kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	
	
	==> etcd [49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316] <==
	{"level":"info","ts":"2025-11-24T09:22:01.780383Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T09:22:01.781185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:22:01.782015Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T09:22:01.784941Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-24T09:22:01.797885Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-24T09:22:01.798067Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T09:22:01.800939Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-11-24T09:22:02.538050Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T09:22:02.538155Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-498341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T09:22:02.538299Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T09:22:02.538404Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T09:22:02.541194Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.541338Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-24T09:22:02.541443Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T09:22:02.541493Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541743Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541816Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T09:22:02.541856Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541928Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541963Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T09:22:02.541994Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.549914Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T09:22:02.550079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.550142Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T09:22:02.550190Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-498341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [9aa825831f876fd8076d516a591bb4a899307d3383d1d114c317d0483577d5e2] <==
	{"level":"warn","ts":"2025-11-24T09:22:29.593633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.617017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.662247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.693042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.726777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.745495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.766147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.779633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.801975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.813601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.859384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.877233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.893999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.917660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.936429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.969446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.985823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.999509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:30.089235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T09:32:28.297315Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2025-11-24T09:32:28.320773Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1071,"took":"23.13167ms","hash":2743139146,"current-db-size-bytes":3289088,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-24T09:32:28.320845Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2743139146,"revision":1071,"compact-revision":-1}
	{"level":"info","ts":"2025-11-24T09:37:28.309505Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1434}
	{"level":"info","ts":"2025-11-24T09:37:28.314168Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1434,"took":"4.114702ms","hash":2602597981,"current-db-size-bytes":3289088,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":2256896,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-11-24T09:37:28.314226Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2602597981,"revision":1434,"compact-revision":1071}
	
	
	==> kernel <==
	 09:38:23 up  8:20,  0 user,  load average: 0.52, 0.33, 1.09
	Linux functional-498341 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a891e2cd44b943fcb0b33577c5e1ba116b71c5708ee7e684e46226d679200d3e] <==
	I1124 09:36:21.707995       1 main.go:301] handling current node
	I1124 09:36:31.707906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:36:31.708005       1 main.go:301] handling current node
	I1124 09:36:41.708385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:36:41.708425       1 main.go:301] handling current node
	I1124 09:36:51.708099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:36:51.708154       1 main.go:301] handling current node
	I1124 09:37:01.714038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:37:01.714071       1 main.go:301] handling current node
	I1124 09:37:11.708201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:37:11.708239       1 main.go:301] handling current node
	I1124 09:37:21.708251       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:37:21.708302       1 main.go:301] handling current node
	I1124 09:37:31.708255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:37:31.708286       1 main.go:301] handling current node
	I1124 09:37:41.715185       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:37:41.715217       1 main.go:301] handling current node
	I1124 09:37:51.708057       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:37:51.708089       1 main.go:301] handling current node
	I1124 09:38:01.708179       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:38:01.708216       1 main.go:301] handling current node
	I1124 09:38:11.708050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:38:11.708081       1 main.go:301] handling current node
	I1124 09:38:21.708067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:38:21.708122       1 main.go:301] handling current node
	
	
	==> kindnet [d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0] <==
	I1124 09:22:00.524698       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:22:00.524978       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 09:22:00.525163       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:22:00.525178       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:22:00.525193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:22:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:22:00.843833       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:22:00.843949       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:22:00.843993       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:22:00.844168       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 09:22:10.844778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:22:10.845799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 09:22:10.846053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:22:10.852198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 09:22:21.928399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 09:22:22.048527       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:22:22.272677       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:22:22.402635       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	
	
	==> kube-apiserver [15e71e99b984ad56351b668dea7807b14fb8676c4c2532e7c2ef16079ae69280] <==
	I1124 09:22:30.895777       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:22:30.904992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:22:30.905062       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 09:22:30.910665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 09:22:30.914166       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1124 09:22:30.915452       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:22:30.917713       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 09:22:31.120722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:22:31.704410       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:22:32.581570       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:22:32.700466       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:22:32.771113       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:22:32.778890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:22:36.293726       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:22:36.297775       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:22:36.299633       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:22:48.647984       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.152.182"}
	I1124 09:22:54.729238       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.117.210"}
	I1124 09:23:03.397458       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.131.230"}
	E1124 09:23:13.428237       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50192: use of closed network connection
	I1124 09:27:17.840622       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.65.110"}
	I1124 09:32:30.820252       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:33:22.904837       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:33:23.218598       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.60.176"}
	I1124 09:33:23.243662       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.213.242"}
	
	
	==> kube-controller-manager [04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e] <==
	
	
	==> kube-controller-manager [c108a7442e642f500cad5954b3fface6603225ecb02334b8443c670f0ef39abc] <==
	I1124 09:22:34.216070       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:22:34.216168       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:22:34.221500       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 09:22:34.222707       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 09:22:34.223836       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 09:22:34.251290       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:22:34.252574       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 09:22:34.255166       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 09:22:34.256340       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:22:34.256393       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:22:34.256439       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:22:34.257650       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:22:34.257654       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:22:34.258886       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 09:22:34.261228       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:22:34.261258       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:22:34.262384       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:22:34.267714       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:34.270102       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 09:33:23.049852       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 09:33:23.073280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 09:33:23.073795       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 09:33:23.090778       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 09:33:23.104864       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 09:33:23.107210       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26] <==
	I1124 09:22:01.980453       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:22:02.484340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1124 09:22:12.590067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-498341&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [71b403f5844112bd1e54c4ac1415199069711a4ca59aeb173507308c18b0aa8d] <==
	I1124 09:22:31.472338       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:22:31.566912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:22:31.667564       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:22:31.667606       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 09:22:31.667698       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:22:31.687130       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:22:31.687187       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:22:31.691183       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:22:31.691508       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:22:31.691534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:22:31.694760       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:22:31.694838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:22:31.695221       1 config.go:200] "Starting service config controller"
	I1124 09:22:31.695280       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:22:31.695628       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:22:31.695698       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:22:31.696187       1 config.go:309] "Starting node config controller"
	I1124 09:22:31.696252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:22:31.696283       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:22:31.795487       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:22:31.795558       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:22:31.795818       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295] <==
	I1124 09:22:02.766884       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [baa104dad4c402409f627a01e3f9b0455ab0b1a3b1f384be692c3db9bf5b6e79] <==
	I1124 09:22:27.117381       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:22:30.792631       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:22:30.793242       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:22:30.793313       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:22:30.793345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:22:30.835359       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 09:22:30.835471       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:22:30.838156       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:22:30.838206       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:22:30.839079       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:22:30.839608       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:22:30.939283       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:36:19 functional-498341 kubelet[4036]: E1124 09:36:19.110029    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:36:20 functional-498341 kubelet[4036]: E1124 09:36:20.109495    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:36:22 functional-498341 kubelet[4036]: E1124 09:36:22.109487    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:36:35 functional-498341 kubelet[4036]: E1124 09:36:35.109705    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:36:36 functional-498341 kubelet[4036]: E1124 09:36:36.110069    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:36:47 functional-498341 kubelet[4036]: E1124 09:36:47.109788    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:36:49 functional-498341 kubelet[4036]: E1124 09:36:49.109722    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:37:00 functional-498341 kubelet[4036]: E1124 09:37:00.123290    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:37:01 functional-498341 kubelet[4036]: E1124 09:37:01.109691    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:37:12 functional-498341 kubelet[4036]: E1124 09:37:12.110185    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:37:13 functional-498341 kubelet[4036]: E1124 09:37:13.109968    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:37:27 functional-498341 kubelet[4036]: E1124 09:37:27.115037    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:37:28 functional-498341 kubelet[4036]: E1124 09:37:28.110162    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:37:39 functional-498341 kubelet[4036]: E1124 09:37:39.975664    4036 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 24 09:37:39 functional-498341 kubelet[4036]: E1124 09:37:39.976198    4036 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 24 09:37:39 functional-498341 kubelet[4036]: E1124 09:37:39.976525    4036 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-scnr9_kubernetes-dashboard(5642d77f-34f7-4e71-aa46-2b4b3985fc42): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 24 09:37:39 functional-498341 kubelet[4036]: E1124 09:37:39.977013    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-scnr9" podUID="5642d77f-34f7-4e71-aa46-2b4b3985fc42"
	Nov 24 09:37:40 functional-498341 kubelet[4036]: E1124 09:37:40.109995    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:37:42 functional-498341 kubelet[4036]: E1124 09:37:42.110445    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:37:51 functional-498341 kubelet[4036]: E1124 09:37:51.111262    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-scnr9" podUID="5642d77f-34f7-4e71-aa46-2b4b3985fc42"
	Nov 24 09:37:55 functional-498341 kubelet[4036]: E1124 09:37:55.109924    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:37:57 functional-498341 kubelet[4036]: E1124 09:37:57.109446    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:38:08 functional-498341 kubelet[4036]: E1124 09:38:08.109409    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:38:09 functional-498341 kubelet[4036]: E1124 09:38:09.109556    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:38:20 functional-498341 kubelet[4036]: E1124 09:38:20.110736    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	
	
	==> storage-provisioner [1fb0c9b9a85a0fec8a1ab2c37119c62c6681f8e5e630a9272f50a23e10b7fd9a] <==
	W1124 09:37:59.180026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:01.183147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:01.189051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:03.191734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:03.198893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:05.201500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:05.206164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:07.210213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:07.214969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:09.217636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:09.224855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:11.227483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:11.231723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:13.234667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:13.241509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:15.244689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:15.249188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:17.252686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:17.257175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:19.260579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:19.264709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:21.267332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:21.271588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:23.275090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:38:23.283704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [559498b49775e56118c49fa50a90d10b8e09907d7e647d35eb62a47bc1b3323c] <==
	I1124 09:22:13.582872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:22:23.886089       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-498341 -n functional-498341
helpers_test.go:269: (dbg) Run:  kubectl --context functional-498341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-t27wr hello-node-connect-7d85dfc575-ktl8q sp-pod dashboard-metrics-scraper-77bf4d6c4c-scnr9 kubernetes-dashboard-855c9754f9-7cvx5
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-498341 describe pod busybox-mount hello-node-75c85bcc94-t27wr hello-node-connect-7d85dfc575-ktl8q sp-pod dashboard-metrics-scraper-77bf4d6c4c-scnr9 kubernetes-dashboard-855c9754f9-7cvx5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-498341 describe pod busybox-mount hello-node-75c85bcc94-t27wr hello-node-connect-7d85dfc575-ktl8q sp-pod dashboard-metrics-scraper-77bf4d6c4c-scnr9 kubernetes-dashboard-855c9754f9-7cvx5: exit status 1 (117.624996ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:33:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://48d016de0ba2bca05ee6afe3e9e192288048f524180b0a5fe777751224642c25
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 09:33:12 +0000
	      Finished:     Mon, 24 Nov 2025 09:33:12 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-89mjr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-89mjr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m15s  default-scheduler  Successfully assigned default/busybox-mount to functional-498341
	  Normal  Pulling    5m14s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m12s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.04s (2.04s including waiting). Image size: 3774172 bytes.
	  Normal  Created    5m12s  kubelet            Created container: mount-munger
	  Normal  Started    5m12s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-t27wr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:27:17 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvjpt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vvjpt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-t27wr to functional-498341
	  Normal   Pulling    7m25s (x5 over 11m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m15s (x5 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m15s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    57s (x33 over 11m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     57s (x33 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-ktl8q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:23:03 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dch74 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dch74:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ktl8q to functional-498341
	  Normal   Pulling    9m51s (x5 over 15m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     9m43s (x5 over 15m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     9m43s (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    5m15s (x31 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5m15s (x31 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:23:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s4942 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-s4942:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/sp-pod to functional-498341
	  Warning  Failed     12m                   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m15s (x5 over 15m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m15s (x3 over 14m)   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m15s (x5 over 14m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m58s (x16 over 14m)  kubelet            Error: ImagePullBackOff
	  Warning  Failed     2m15s (x2 over 11m)   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    0s (x32 over 14m)     kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-scnr9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7cvx5" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-498341 describe pod busybox-mount hello-node-75c85bcc94-t27wr hello-node-connect-7d85dfc575-ktl8q sp-pod dashboard-metrics-scraper-77bf4d6c4c-scnr9 kubernetes-dashboard-855c9754f9-7cvx5: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.57s)
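The `toomanyrequests` events in the sp-pod describe output above are Docker Hub's anonymous pull rate limit. A hedged sketch of one common mitigation, pulling with credentials via an imagePullSecret (the secret name `dockerhub-creds` and the container name are hypothetical placeholders, not taken from this test):

```yaml
# Hypothetical pod spec fragment: authenticate Docker Hub pulls to avoid the
# anonymous rate limit seen in the events above. The secret would be created
# beforehand with: kubectl create secret docker-registry dockerhub-creds ...
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
spec:
  imagePullSecrets:
    - name: dockerhub-creds               # hypothetical secret name
  containers:
    - name: frontend                      # illustrative container name
      image: docker.io/library/nginx:latest   # fully qualified reference
```

Pre-loading the image into the node (e.g. `minikube image load docker.io/library/nginx:latest`) with `imagePullPolicy: IfNotPresent` is another way to sidestep registry pulls entirely in CI.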

TestFunctional/parallel/ServiceCmdConnect (603.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-498341 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-498341 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-ktl8q" [ab1e6451-329d-49eb-83f1-7cc1b00f3e21] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-498341 -n functional-498341
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-24 09:33:03.755958842 +0000 UTC m=+1236.335791147
functional_test.go:1645: (dbg) Run:  kubectl --context functional-498341 describe po hello-node-connect-7d85dfc575-ktl8q -n default
functional_test.go:1645: (dbg) kubectl --context functional-498341 describe po hello-node-connect-7d85dfc575-ktl8q -n default:
Name:             hello-node-connect-7d85dfc575-ktl8q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-498341/192.168.49.2
Start Time:       Mon, 24 Nov 2025 09:23:03 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dch74 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dch74:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ktl8q to functional-498341
  Normal   Pulling    4m30s (x5 over 9m58s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     4m22s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     4m22s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     3m18s (x16 over 9m57s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m16s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-498341 logs hello-node-connect-7d85dfc575-ktl8q -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-498341 logs hello-node-connect-7d85dfc575-ktl8q -n default: exit status 1 (102.618567ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ktl8q" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-498341 logs hello-node-connect-7d85dfc575-ktl8q -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-498341 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-ktl8q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-498341/192.168.49.2
Start Time:       Mon, 24 Nov 2025 09:23:03 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dch74 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dch74:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ktl8q to functional-498341
  Normal   Pulling    4m31s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     4m23s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     4m23s (x5 over 9m59s)   kubelet            Error: ErrImagePull
  Warning  Failed     3m19s (x16 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m17s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-498341 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-498341 logs -l app=hello-node-connect: exit status 1 (99.490871ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ktl8q" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-498341 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-498341 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.131.230
IPs:                      10.99.131.230
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30929/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
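The "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" failures above come from the CRI-O host's short-name resolution policy (containers-registries.conf(5)): with more than one unqualified-search registry configured, an unqualified name cannot be resolved. A hedged sketch of a host-side fix; the drop-in filename is hypothetical, and the alias assumes the image is hosted on Docker Hub:

```toml
# /etc/containers/registries.conf.d/99-echo-server.conf (hypothetical drop-in)
# Option 1: relax enforcement so ambiguous short names are still attempted.
short-name-mode = "permissive"

# Option 2: resolve this short name explicitly so it is no longer ambiguous.
[aliases]
"kicbase/echo-server" = "docker.io/kicbase/echo-server"
```

Referencing the image fully qualified in the deployment (`docker.io/kicbase/echo-server`) would avoid short-name resolution without touching host configuration.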
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-498341
helpers_test.go:243: (dbg) docker inspect functional-498341:

-- stdout --
	[
	    {
	        "Id": "6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f",
	        "Created": "2025-11-24T09:19:55.998787995Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1822668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:19:56.073167874Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/hosts",
	        "LogPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f-json.log",
	        "Name": "/functional-498341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-498341:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-498341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f",
	                "LowerDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/merged",
	                "UpperDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/diff",
	                "WorkDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-498341",
	                "Source": "/var/lib/docker/volumes/functional-498341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-498341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-498341",
	                "name.minikube.sigs.k8s.io": "functional-498341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "240a4dd1813daa01947671dc124987da45cf1671d25b32d2f3891003a67f2fe7",
	            "SandboxKey": "/var/run/docker/netns/240a4dd1813d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35000"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35001"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35004"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35002"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35003"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-498341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:13:66:1a:33:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c1abece26d5d0dbba2a759db10fd2d39adcd67e3473b07c793acf2b30828945",
	                    "EndpointID": "08ec2567ced7c59a5794dec53e18552f77fb769f748c21d66ed8b80f607753ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-498341",
	                        "6c463f059d60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-498341 -n functional-498341
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 logs -n 25: (1.485138133s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-498341 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ kubectl │ functional-498341 kubectl -- --context functional-498341 get pods                                                          │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ start   │ -p functional-498341 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ service │ invalid-svc -p functional-498341                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ config  │ functional-498341 config unset cpus                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ cp      │ functional-498341 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ config  │ functional-498341 config get cpus                                                                                          │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ config  │ functional-498341 config set cpus 2                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ config  │ functional-498341 config get cpus                                                                                          │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ config  │ functional-498341 config unset cpus                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ ssh     │ functional-498341 ssh -n functional-498341 sudo cat /home/docker/cp-test.txt                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ config  │ functional-498341 config get cpus                                                                                          │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ ssh     │ functional-498341 ssh echo hello                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ cp      │ functional-498341 cp functional-498341:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2522289623/001/cp-test.txt │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ ssh     │ functional-498341 ssh cat /etc/hostname                                                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ ssh     │ functional-498341 ssh -n functional-498341 sudo cat /home/docker/cp-test.txt                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ tunnel  │ functional-498341 tunnel --alsologtostderr                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ tunnel  │ functional-498341 tunnel --alsologtostderr                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ cp      │ functional-498341 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ ssh     │ functional-498341 ssh -n functional-498341 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ tunnel  │ functional-498341 tunnel --alsologtostderr                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ addons  │ functional-498341 addons list                                                                                              │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	│ addons  │ functional-498341 addons list -o json                                                                                      │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:21:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:21:45.751921 1826875 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:21:45.752047 1826875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:21:45.752051 1826875 out.go:374] Setting ErrFile to fd 2...
	I1124 09:21:45.752054 1826875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:21:45.752324 1826875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:21:45.752672 1826875 out.go:368] Setting JSON to false
	I1124 09:21:45.753610 1826875 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29056,"bootTime":1763947050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:21:45.753677 1826875 start.go:143] virtualization:  
	I1124 09:21:45.757154 1826875 out.go:179] * [functional-498341] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:21:45.760190 1826875 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:21:45.760291 1826875 notify.go:221] Checking for updates...
	I1124 09:21:45.766186 1826875 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:21:45.769174 1826875 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:21:45.772188 1826875 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:21:45.775289 1826875 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:21:45.778333 1826875 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:21:45.781989 1826875 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:21:45.782120 1826875 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:21:45.812937 1826875 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:21:45.813041 1826875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:21:45.882883 1826875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-24 09:21:45.872964508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:21:45.882982 1826875 docker.go:319] overlay module found
	I1124 09:21:45.886170 1826875 out.go:179] * Using the docker driver based on existing profile
	I1124 09:21:45.889003 1826875 start.go:309] selected driver: docker
	I1124 09:21:45.889014 1826875 start.go:927] validating driver "docker" against &{Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:21:45.889150 1826875 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:21:45.889260 1826875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:21:45.955827 1826875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-24 09:21:45.946686867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:21:45.956233 1826875 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:21:45.956258 1826875 cni.go:84] Creating CNI manager for ""
	I1124 09:21:45.956313 1826875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:21:45.956360 1826875 start.go:353] cluster config:
	{Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:21:45.959489 1826875 out.go:179] * Starting "functional-498341" primary control-plane node in "functional-498341" cluster
	I1124 09:21:45.962288 1826875 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:21:45.965300 1826875 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:21:45.968223 1826875 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:21:45.968456 1826875 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:21:45.968483 1826875 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1124 09:21:45.968491 1826875 cache.go:65] Caching tarball of preloaded images
	I1124 09:21:45.968572 1826875 preload.go:238] Found /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 09:21:45.968580 1826875 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:21:45.968700 1826875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/config.json ...
	I1124 09:21:45.989414 1826875 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:21:45.989424 1826875 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:21:45.989451 1826875 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:21:45.989481 1826875 start.go:360] acquireMachinesLock for functional-498341: {Name:mk92ae30192553f98cf7dcbc727ad92a239ba1db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:21:45.989546 1826875 start.go:364] duration metric: took 48.591µs to acquireMachinesLock for "functional-498341"
	I1124 09:21:45.989566 1826875 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:21:45.989570 1826875 fix.go:54] fixHost starting: 
	I1124 09:21:45.989832 1826875 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:21:46.010042 1826875 fix.go:112] recreateIfNeeded on functional-498341: state=Running err=<nil>
	W1124 09:21:46.010070 1826875 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:21:46.013569 1826875 out.go:252] * Updating the running docker "functional-498341" container ...
	I1124 09:21:46.013602 1826875 machine.go:94] provisionDockerMachine start ...
	I1124 09:21:46.013705 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:46.032544 1826875 main.go:143] libmachine: Using SSH client type: native
	I1124 09:21:46.032913 1826875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35000 <nil> <nil>}
	I1124 09:21:46.032921 1826875 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:21:46.189282 1826875 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-498341
	
	I1124 09:21:46.189296 1826875 ubuntu.go:182] provisioning hostname "functional-498341"
	I1124 09:21:46.189397 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:46.206535 1826875 main.go:143] libmachine: Using SSH client type: native
	I1124 09:21:46.206833 1826875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35000 <nil> <nil>}
	I1124 09:21:46.206841 1826875 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-498341 && echo "functional-498341" | sudo tee /etc/hostname
	I1124 09:21:46.366656 1826875 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-498341
	
	I1124 09:21:46.366740 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:46.385164 1826875 main.go:143] libmachine: Using SSH client type: native
	I1124 09:21:46.385458 1826875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35000 <nil> <nil>}
	I1124 09:21:46.385480 1826875 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-498341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-498341/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-498341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:21:46.537521 1826875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:21:46.537536 1826875 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:21:46.537558 1826875 ubuntu.go:190] setting up certificates
	I1124 09:21:46.537567 1826875 provision.go:84] configureAuth start
	I1124 09:21:46.537625 1826875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-498341
	I1124 09:21:46.556077 1826875 provision.go:143] copyHostCerts
	I1124 09:21:46.556148 1826875 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:21:46.556162 1826875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:21:46.556241 1826875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:21:46.556340 1826875 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:21:46.556344 1826875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:21:46.556369 1826875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:21:46.556417 1826875 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:21:46.556420 1826875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:21:46.556442 1826875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:21:46.556485 1826875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-498341 san=[127.0.0.1 192.168.49.2 functional-498341 localhost minikube]
	I1124 09:21:47.106083 1826875 provision.go:177] copyRemoteCerts
	I1124 09:21:47.106143 1826875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:21:47.106180 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:47.124170 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:47.233216 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:21:47.250958 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:21:47.269786 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:21:47.287892 1826875 provision.go:87] duration metric: took 750.311721ms to configureAuth
	I1124 09:21:47.287909 1826875 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:21:47.288110 1826875 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:21:47.288209 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:47.306094 1826875 main.go:143] libmachine: Using SSH client type: native
	I1124 09:21:47.306454 1826875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35000 <nil> <nil>}
	I1124 09:21:47.306465 1826875 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:21:52.727122 1826875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:21:52.727134 1826875 machine.go:97] duration metric: took 6.713525843s to provisionDockerMachine
	I1124 09:21:52.727145 1826875 start.go:293] postStartSetup for "functional-498341" (driver="docker")
	I1124 09:21:52.727155 1826875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:21:52.727217 1826875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:21:52.727258 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:52.745490 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:52.849173 1826875 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:21:52.852715 1826875 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:21:52.852733 1826875 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:21:52.852742 1826875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:21:52.852796 1826875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:21:52.852869 1826875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:21:52.852954 1826875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:21:52.852997 1826875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:21:52.860716 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:21:52.878886 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:21:52.896922 1826875 start.go:296] duration metric: took 169.76288ms for postStartSetup
	I1124 09:21:52.897014 1826875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:21:52.897052 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:52.914395 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:53.014704 1826875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:21:53.019679 1826875 fix.go:56] duration metric: took 7.030101194s for fixHost
	I1124 09:21:53.019696 1826875 start.go:83] releasing machines lock for "functional-498341", held for 7.030141999s
	I1124 09:21:53.019785 1826875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-498341
	I1124 09:21:53.037767 1826875 ssh_runner.go:195] Run: cat /version.json
	I1124 09:21:53.037808 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:53.037848 1826875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:21:53.037913 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:53.059451 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:53.063440 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:53.250372 1826875 ssh_runner.go:195] Run: systemctl --version
	I1124 09:21:53.257034 1826875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:21:53.293945 1826875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:21:53.298334 1826875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:21:53.298402 1826875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:21:53.306478 1826875 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:21:53.306492 1826875 start.go:496] detecting cgroup driver to use...
	I1124 09:21:53.306524 1826875 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:21:53.306577 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:21:53.322766 1826875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:21:53.336145 1826875 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:21:53.336213 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:21:53.353503 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:21:53.367262 1826875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:21:53.501572 1826875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:21:53.650546 1826875 docker.go:234] disabling docker service ...
	I1124 09:21:53.650605 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:21:53.666421 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:21:53.679764 1826875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:21:53.824985 1826875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:21:53.973752 1826875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:21:53.987475 1826875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:21:54.006394 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:54.165352 1826875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:21:54.165418 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.174992 1826875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:21:54.175074 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.184118 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.193149 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.202210 1826875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:21:54.210661 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.219837 1826875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.229290 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.239225 1826875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:21:54.247116 1826875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:21:54.254543 1826875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:21:54.384822 1826875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:21:59.123618 1826875 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.738772424s)
	I1124 09:21:59.123633 1826875 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:21:59.123693 1826875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:21:59.127660 1826875 start.go:564] Will wait 60s for crictl version
	I1124 09:21:59.127714 1826875 ssh_runner.go:195] Run: which crictl
	I1124 09:21:59.131501 1826875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:21:59.160071 1826875 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:21:59.160144 1826875 ssh_runner.go:195] Run: crio --version
	I1124 09:21:59.188514 1826875 ssh_runner.go:195] Run: crio --version
	I1124 09:21:59.220805 1826875 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:21:59.223766 1826875 cli_runner.go:164] Run: docker network inspect functional-498341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:21:59.240048 1826875 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
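The `grep 192.168.49.1<TAB>host.minikube.internal$ /etc/hosts` probe checks whether the host-gateway entry already exists before writing it. The same check-then-append pattern, sketched against a temp file (the IP and hostname mirror the log; the file path is a stand-in for `/etc/hosts`):

```shell
#!/bin/sh
# Check-then-append pattern for an /etc/hosts entry, against a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"

add_host() {
  # Embed a real tab in the grep pattern via printf; the trailing $
  # anchors the match at end of line, as in the log.
  pat="$(printf '192.168.49.1\thost.minikube.internal$')"
  grep -q "$pat" "$hosts" || \
    printf '192.168.49.1\thost.minikube.internal\n' >> "$hosts"
}

add_host
add_host   # second call is a no-op: the entry is never duplicated
cat "$hosts"
```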
	I1124 09:21:59.247790 1826875 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1124 09:21:59.250720 1826875 kubeadm.go:884] updating cluster {Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:21:59.250946 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:59.415297 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:59.594588 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:59.788011 1826875 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:21:59.788155 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:59.935549 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:22:00.083592 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:22:00.371846 1826875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:22:00.498401 1826875 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:22:00.498413 1826875 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:22:00.498467 1826875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:22:00.591742 1826875 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:22:00.591755 1826875 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:22:00.591761 1826875 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.2 crio true true} ...
	I1124 09:22:00.591864 1826875 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-498341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:22:00.591942 1826875 ssh_runner.go:195] Run: crio config
	I1124 09:22:00.810572 1826875 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1124 09:22:00.811684 1826875 cni.go:84] Creating CNI manager for ""
	I1124 09:22:00.811697 1826875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:22:00.811712 1826875 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:22:00.811737 1826875 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-498341 NodeName:functional-498341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:22:00.811877 1826875 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-498341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:22:00.811951 1826875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:22:00.821153 1826875 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:22:00.821223 1826875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:22:00.836226 1826875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 09:22:00.861148 1826875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:22:00.884841 1826875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1124 09:22:00.907124 1826875 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:22:00.915166 1826875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:22:01.173356 1826875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:22:01.192408 1826875 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341 for IP: 192.168.49.2
	I1124 09:22:01.192419 1826875 certs.go:195] generating shared ca certs ...
	I1124 09:22:01.192433 1826875 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:22:01.192585 1826875 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:22:01.192628 1826875 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:22:01.192634 1826875 certs.go:257] generating profile certs ...
	I1124 09:22:01.192716 1826875 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.key
	I1124 09:22:01.192764 1826875 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/apiserver.key.fe75fa91
	I1124 09:22:01.192803 1826875 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/proxy-client.key
	I1124 09:22:01.192928 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:22:01.192958 1826875 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:22:01.192966 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:22:01.192992 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:22:01.193018 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:22:01.193041 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:22:01.193084 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:22:01.193768 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:22:01.217843 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:22:01.245248 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:22:01.275020 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:22:01.305798 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:22:01.328708 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:22:01.359997 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:22:01.387613 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:22:01.440861 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:22:01.509658 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:22:01.544601 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:22:01.578782 1826875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:22:01.600041 1826875 ssh_runner.go:195] Run: openssl version
	I1124 09:22:01.608208 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:22:01.623861 1826875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:22:01.628355 1826875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:19 /usr/share/ca-certificates/18067042.pem
	I1124 09:22:01.628422 1826875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:22:01.686444 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:22:01.695877 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:22:01.710549 1826875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:22:01.715127 1826875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:22:01.715184 1826875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:22:01.781054 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:22:01.792601 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:22:01.806392 1826875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:22:01.813872 1826875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:19 /usr/share/ca-certificates/1806704.pem
	I1124 09:22:01.813933 1826875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:22:01.872953 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
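Each CA install above follows the same three-step dance: copy the PEM under `/usr/share/ca-certificates`, compute its OpenSSL subject hash with `openssl x509 -hash -noout -in <pem>`, and create an idempotent `/etc/ssl/certs/<hash>.0` symlink (`test -L … || ln -fs …`) so OpenSSL's lookup-by-hash directory scan finds it. The symlink step, sketched with stand-in paths and a hard-coded hash (in a real run the hash comes from the `openssl x509 -hash` output, e.g. `3ec20f2e` for `18067042.pem` in this log):

```shell
#!/bin/sh
# Idempotent hash-symlink creation, as minikube does for /etc/ssl/certs.
# The directory, file name, and "3ec20f2e" hash are illustrative stand-ins.
certdir=$(mktemp -d)
printf 'PEM DATA\n' > "$certdir/18067042.pem"

link="$certdir/3ec20f2e.0"
# Create the link only if it does not already exist as a symlink;
# -f makes re-creation safe if a stale regular file is in the way.
test -L "$link" || ln -fs "$certdir/18067042.pem" "$link"
test -L "$link" || ln -fs "$certdir/18067042.pem" "$link"  # rerun: no-op

readlink "$link"
```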
	I1124 09:22:01.888023 1826875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:22:01.896520 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:22:01.963293 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:22:02.022605 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:22:02.069864 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:22:02.123394 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:22:02.170814 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:22:02.214655 1826875 kubeadm.go:401] StartCluster: {Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:22:02.214747 1826875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:22:02.214813 1826875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:22:02.258925 1826875 cri.go:89] found id: "04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e"
	I1124 09:22:02.258944 1826875 cri.go:89] found id: "49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316"
	I1124 09:22:02.258951 1826875 cri.go:89] found id: "93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295"
	I1124 09:22:02.258954 1826875 cri.go:89] found id: "1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f"
	I1124 09:22:02.258957 1826875 cri.go:89] found id: "3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26"
	I1124 09:22:02.258959 1826875 cri.go:89] found id: "d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0"
	I1124 09:22:02.258964 1826875 cri.go:89] found id: "9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186"
	I1124 09:22:02.258966 1826875 cri.go:89] found id: "fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251"
	I1124 09:22:02.258972 1826875 cri.go:89] found id: "6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131"
	I1124 09:22:02.258981 1826875 cri.go:89] found id: "220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169"
	I1124 09:22:02.258986 1826875 cri.go:89] found id: "97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2"
	I1124 09:22:02.258988 1826875 cri.go:89] found id: "e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22"
	I1124 09:22:02.258994 1826875 cri.go:89] found id: "524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf"
	I1124 09:22:02.258996 1826875 cri.go:89] found id: "145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7"
	I1124 09:22:02.258998 1826875 cri.go:89] found id: ""
	I1124 09:22:02.259052 1826875 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:22:02.278648 1826875 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:22:02Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:22:02.278733 1826875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:22:02.290350 1826875 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:22:02.290359 1826875 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:22:02.290418 1826875 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:22:02.301883 1826875 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:22:02.302457 1826875 kubeconfig.go:125] found "functional-498341" server: "https://192.168.49.2:8441"
	I1124 09:22:02.303938 1826875 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:22:02.318445 1826875 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 09:20:03.100427242 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 09:22:00.901955469 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
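The restart path decides between "reuse the existing control plane" and "reconfigure" by diffing the deployed `kubeadm.yaml` against the freshly rendered `kubeadm.yaml.new`; a non-empty diff (here, the admission-plugins change) triggers the reconfigure branch. A minimal sketch of that decision, with temp files and one-line contents standing in for the real configs:

```shell
#!/bin/sh
# Drift check: reconfigure only when the rendered config differs
# from what is already on disk. File contents are stand-ins.
old=$(mktemp); new=$(mktemp)
printf 'enable-admission-plugins: "NamespaceLifecycle,ResourceQuota"\n' > "$old"
printf 'enable-admission-plugins: "NamespaceAutoProvision"\n' > "$new"

if diff -u "$old" "$new" > /dev/null; then
  decision="reuse"
else
  decision="reconfigure"
  cp "$new" "$old"   # mirrors: cp kubeadm.yaml.new kubeadm.yaml
fi
echo "$decision"
```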
	I1124 09:22:02.318454 1826875 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:22:02.318475 1826875 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:22:02.318541 1826875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:22:02.370948 1826875 cri.go:89] found id: "04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e"
	I1124 09:22:02.370959 1826875 cri.go:89] found id: "49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316"
	I1124 09:22:02.370962 1826875 cri.go:89] found id: "93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295"
	I1124 09:22:02.370965 1826875 cri.go:89] found id: "1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f"
	I1124 09:22:02.370967 1826875 cri.go:89] found id: "3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26"
	I1124 09:22:02.370970 1826875 cri.go:89] found id: "d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0"
	I1124 09:22:02.370976 1826875 cri.go:89] found id: "9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186"
	I1124 09:22:02.370980 1826875 cri.go:89] found id: "fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251"
	I1124 09:22:02.370982 1826875 cri.go:89] found id: "6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131"
	I1124 09:22:02.370987 1826875 cri.go:89] found id: "220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169"
	I1124 09:22:02.370990 1826875 cri.go:89] found id: "97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2"
	I1124 09:22:02.371001 1826875 cri.go:89] found id: "e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22"
	I1124 09:22:02.371003 1826875 cri.go:89] found id: "524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf"
	I1124 09:22:02.371005 1826875 cri.go:89] found id: "145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7"
	I1124 09:22:02.371007 1826875 cri.go:89] found id: ""
	I1124 09:22:02.371012 1826875 cri.go:252] Stopping containers: [04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e 49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316 93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295 1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f 3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26 d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0 9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186 fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251 6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131 220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169 97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2 e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22 524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf 145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7]
	I1124 09:22:02.371077 1826875 ssh_runner.go:195] Run: which crictl
	I1124 09:22:02.377716 1826875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e 49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316 93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295 1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f 3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26 d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0 9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186 fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251 6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131 220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169 97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2 e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22 524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf 145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7
	I1124 09:22:23.246006 1826875 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e 49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316 93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295 1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f 3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26 d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0 9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186 fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251 6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131 220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169 97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2 e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22 524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf 145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7:
(20.868253035s)
	I1124 09:22:23.246079 1826875 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:22:23.369062 1826875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:22:23.377493 1826875 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 24 09:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 24 09:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 24 09:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov 24 09:20 /etc/kubernetes/scheduler.conf
	
	I1124 09:22:23.377557 1826875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:22:23.385832 1826875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:22:23.394551 1826875 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:22:23.394608 1826875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:22:23.402578 1826875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:22:23.410561 1826875 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:22:23.410618 1826875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:22:23.418230 1826875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:22:23.426123 1826875 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:22:23.426202 1826875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:22:23.433956 1826875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:22:23.442317 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:23.489312 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:25.710006 1826875 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.220667297s)
	I1124 09:22:25.710087 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:25.939124 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:26.003887 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:26.066592 1826875 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:22:26.066663 1826875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:26.567640 1826875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:27.066789 1826875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:27.082983 1826875 api_server.go:72] duration metric: took 1.016391114s to wait for apiserver process to appear ...
	I1124 09:22:27.082998 1826875 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:22:27.083015 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:30.737047 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:22:30.737062 1826875 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:22:30.737074 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:30.792407 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:22:30.792422 1826875 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:22:31.083876 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:31.092063 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:22:31.092079 1826875 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:22:31.583679 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:31.591692 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:22:31.591707 1826875 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:22:32.083304 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:32.091333 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1124 09:22:32.105031 1826875 api_server.go:141] control plane version: v1.34.2
	I1124 09:22:32.105049 1826875 api_server.go:131] duration metric: took 5.022045671s to wait for apiserver health ...
	I1124 09:22:32.105057 1826875 cni.go:84] Creating CNI manager for ""
	I1124 09:22:32.105065 1826875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:22:32.108297 1826875 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:22:32.111333 1826875 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:22:32.120899 1826875 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:22:32.120910 1826875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:22:32.135202 1826875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:22:32.588500 1826875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:22:32.592036 1826875 system_pods.go:59] 8 kube-system pods found
	I1124 09:22:32.592054 1826875 system_pods.go:61] "coredns-66bc5c9577-vfd2t" [d8b73e41-010c-4712-933f-6ca47fb26a6a] Running
	I1124 09:22:32.592062 1826875 system_pods.go:61] "etcd-functional-498341" [06ec2eb1-a0ff-4f32-8f39-031eb5592563] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:22:32.592066 1826875 system_pods.go:61] "kindnet-dxrpc" [ac0a9329-4003-4328-9896-e4fbc9ec36bc] Running
	I1124 09:22:32.592071 1826875 system_pods.go:61] "kube-apiserver-functional-498341" [53f00290-216f-48af-8fe6-1806a95ebee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:22:32.592076 1826875 system_pods.go:61] "kube-controller-manager-functional-498341" [dbaaefa2-eeae-425b-9ce7-732b2602ad3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:22:32.592080 1826875 system_pods.go:61] "kube-proxy-4n9vx" [70c6582b-4573-4300-9bf8-c45ba36f762a] Running
	I1124 09:22:32.592084 1826875 system_pods.go:61] "kube-scheduler-functional-498341" [6f930e44-5f27-4bec-a0ac-bfe6bbc2838b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:22:32.592087 1826875 system_pods.go:61] "storage-provisioner" [4a8afbbb-2929-4192-a5d1-4e6257b297ae] Running
	I1124 09:22:32.592092 1826875 system_pods.go:74] duration metric: took 3.581782ms to wait for pod list to return data ...
	I1124 09:22:32.592098 1826875 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:22:32.595047 1826875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 09:22:32.595066 1826875 node_conditions.go:123] node cpu capacity is 2
	I1124 09:22:32.595076 1826875 node_conditions.go:105] duration metric: took 2.974455ms to run NodePressure ...
	I1124 09:22:32.595137 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:32.849781 1826875 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1124 09:22:32.853526 1826875 kubeadm.go:744] kubelet initialised
	I1124 09:22:32.853537 1826875 kubeadm.go:745] duration metric: took 3.742809ms waiting for restarted kubelet to initialise ...
	I1124 09:22:32.853550 1826875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:22:32.863017 1826875 ops.go:34] apiserver oom_adj: -16
	I1124 09:22:32.863046 1826875 kubeadm.go:602] duration metric: took 30.572682401s to restartPrimaryControlPlane
	I1124 09:22:32.863055 1826875 kubeadm.go:403] duration metric: took 30.648420719s to StartCluster
	I1124 09:22:32.863069 1826875 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:22:32.863153 1826875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:22:32.863862 1826875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:22:32.864137 1826875 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:22:32.864365 1826875 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:22:32.864493 1826875 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:22:32.864552 1826875 addons.go:70] Setting storage-provisioner=true in profile "functional-498341"
	I1124 09:22:32.864564 1826875 addons.go:239] Setting addon storage-provisioner=true in "functional-498341"
	W1124 09:22:32.864580 1826875 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:22:32.864601 1826875 host.go:66] Checking if "functional-498341" exists ...
	I1124 09:22:32.865033 1826875 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:22:32.865550 1826875 addons.go:70] Setting default-storageclass=true in profile "functional-498341"
	I1124 09:22:32.865564 1826875 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-498341"
	I1124 09:22:32.865829 1826875 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:22:32.867573 1826875 out.go:179] * Verifying Kubernetes components...
	I1124 09:22:32.871342 1826875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:22:32.904200 1826875 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:22:32.907448 1826875 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:22:32.907462 1826875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:22:32.907533 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:22:32.916161 1826875 addons.go:239] Setting addon default-storageclass=true in "functional-498341"
	W1124 09:22:32.916172 1826875 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:22:32.916194 1826875 host.go:66] Checking if "functional-498341" exists ...
	I1124 09:22:32.916636 1826875 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:22:32.943283 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:22:32.962864 1826875 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:22:32.962878 1826875 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:22:32.962944 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:22:33.000070 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:22:33.112433 1826875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:22:33.158550 1826875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:22:33.176684 1826875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:22:33.916758 1826875 node_ready.go:35] waiting up to 6m0s for node "functional-498341" to be "Ready" ...
	I1124 09:22:33.919958 1826875 node_ready.go:49] node "functional-498341" is "Ready"
	I1124 09:22:33.919974 1826875 node_ready.go:38] duration metric: took 3.198351ms for node "functional-498341" to be "Ready" ...
	I1124 09:22:33.919985 1826875 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:22:33.920042 1826875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:33.928015 1826875 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:22:33.930895 1826875 addons.go:530] duration metric: took 1.066478613s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:22:33.934093 1826875 api_server.go:72] duration metric: took 1.069924737s to wait for apiserver process to appear ...
	I1124 09:22:33.934109 1826875 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:22:33.934127 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:33.943288 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1124 09:22:33.944302 1826875 api_server.go:141] control plane version: v1.34.2
	I1124 09:22:33.944316 1826875 api_server.go:131] duration metric: took 10.2017ms to wait for apiserver health ...
	I1124 09:22:33.944323 1826875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:22:33.947266 1826875 system_pods.go:59] 8 kube-system pods found
	I1124 09:22:33.947280 1826875 system_pods.go:61] "coredns-66bc5c9577-vfd2t" [d8b73e41-010c-4712-933f-6ca47fb26a6a] Running
	I1124 09:22:33.947310 1826875 system_pods.go:61] "etcd-functional-498341" [06ec2eb1-a0ff-4f32-8f39-031eb5592563] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:22:33.947313 1826875 system_pods.go:61] "kindnet-dxrpc" [ac0a9329-4003-4328-9896-e4fbc9ec36bc] Running
	I1124 09:22:33.947319 1826875 system_pods.go:61] "kube-apiserver-functional-498341" [53f00290-216f-48af-8fe6-1806a95ebee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:22:33.947324 1826875 system_pods.go:61] "kube-controller-manager-functional-498341" [dbaaefa2-eeae-425b-9ce7-732b2602ad3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:22:33.947327 1826875 system_pods.go:61] "kube-proxy-4n9vx" [70c6582b-4573-4300-9bf8-c45ba36f762a] Running
	I1124 09:22:33.947332 1826875 system_pods.go:61] "kube-scheduler-functional-498341" [6f930e44-5f27-4bec-a0ac-bfe6bbc2838b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:22:33.947345 1826875 system_pods.go:61] "storage-provisioner" [4a8afbbb-2929-4192-a5d1-4e6257b297ae] Running
	I1124 09:22:33.947349 1826875 system_pods.go:74] duration metric: took 3.021496ms to wait for pod list to return data ...
	I1124 09:22:33.947355 1826875 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:22:33.949471 1826875 default_sa.go:45] found service account: "default"
	I1124 09:22:33.949482 1826875 default_sa.go:55] duration metric: took 2.12358ms for default service account to be created ...
	I1124 09:22:33.949489 1826875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:22:33.952257 1826875 system_pods.go:86] 8 kube-system pods found
	I1124 09:22:33.952271 1826875 system_pods.go:89] "coredns-66bc5c9577-vfd2t" [d8b73e41-010c-4712-933f-6ca47fb26a6a] Running
	I1124 09:22:33.952280 1826875 system_pods.go:89] "etcd-functional-498341" [06ec2eb1-a0ff-4f32-8f39-031eb5592563] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:22:33.952289 1826875 system_pods.go:89] "kindnet-dxrpc" [ac0a9329-4003-4328-9896-e4fbc9ec36bc] Running
	I1124 09:22:33.952295 1826875 system_pods.go:89] "kube-apiserver-functional-498341" [53f00290-216f-48af-8fe6-1806a95ebee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:22:33.952300 1826875 system_pods.go:89] "kube-controller-manager-functional-498341" [dbaaefa2-eeae-425b-9ce7-732b2602ad3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:22:33.952305 1826875 system_pods.go:89] "kube-proxy-4n9vx" [70c6582b-4573-4300-9bf8-c45ba36f762a] Running
	I1124 09:22:33.952310 1826875 system_pods.go:89] "kube-scheduler-functional-498341" [6f930e44-5f27-4bec-a0ac-bfe6bbc2838b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:22:33.952313 1826875 system_pods.go:89] "storage-provisioner" [4a8afbbb-2929-4192-a5d1-4e6257b297ae] Running
	I1124 09:22:33.952319 1826875 system_pods.go:126] duration metric: took 2.825546ms to wait for k8s-apps to be running ...
	I1124 09:22:33.952326 1826875 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:22:33.952383 1826875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:22:33.965442 1826875 system_svc.go:56] duration metric: took 13.107312ms WaitForService to wait for kubelet
	I1124 09:22:33.965460 1826875 kubeadm.go:587] duration metric: took 1.101300429s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:22:33.965475 1826875 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:22:33.968056 1826875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 09:22:33.968083 1826875 node_conditions.go:123] node cpu capacity is 2
	I1124 09:22:33.968092 1826875 node_conditions.go:105] duration metric: took 2.61317ms to run NodePressure ...
	I1124 09:22:33.968104 1826875 start.go:242] waiting for startup goroutines ...
	I1124 09:22:33.968110 1826875 start.go:247] waiting for cluster config update ...
	I1124 09:22:33.968120 1826875 start.go:256] writing updated cluster config ...
	I1124 09:22:33.968421 1826875 ssh_runner.go:195] Run: rm -f paused
	I1124 09:22:33.972146 1826875 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:22:33.975417 1826875 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfd2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:33.980466 1826875 pod_ready.go:94] pod "coredns-66bc5c9577-vfd2t" is "Ready"
	I1124 09:22:33.980480 1826875 pod_ready.go:86] duration metric: took 5.047737ms for pod "coredns-66bc5c9577-vfd2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:33.982799 1826875 pod_ready.go:83] waiting for pod "etcd-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:22:35.988686 1826875 pod_ready.go:104] pod "etcd-functional-498341" is not "Ready", error: <nil>
	W1124 09:22:38.487737 1826875 pod_ready.go:104] pod "etcd-functional-498341" is not "Ready", error: <nil>
	W1124 09:22:40.488247 1826875 pod_ready.go:104] pod "etcd-functional-498341" is not "Ready", error: <nil>
	W1124 09:22:42.488552 1826875 pod_ready.go:104] pod "etcd-functional-498341" is not "Ready", error: <nil>
	I1124 09:22:43.488334 1826875 pod_ready.go:94] pod "etcd-functional-498341" is "Ready"
	I1124 09:22:43.488348 1826875 pod_ready.go:86] duration metric: took 9.50553522s for pod "etcd-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.490851 1826875 pod_ready.go:83] waiting for pod "kube-apiserver-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.495851 1826875 pod_ready.go:94] pod "kube-apiserver-functional-498341" is "Ready"
	I1124 09:22:43.495866 1826875 pod_ready.go:86] duration metric: took 5.002371ms for pod "kube-apiserver-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.498483 1826875 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.504014 1826875 pod_ready.go:94] pod "kube-controller-manager-functional-498341" is "Ready"
	I1124 09:22:43.504029 1826875 pod_ready.go:86] duration metric: took 5.534135ms for pod "kube-controller-manager-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.506800 1826875 pod_ready.go:83] waiting for pod "kube-proxy-4n9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.686748 1826875 pod_ready.go:94] pod "kube-proxy-4n9vx" is "Ready"
	I1124 09:22:43.686763 1826875 pod_ready.go:86] duration metric: took 179.949496ms for pod "kube-proxy-4n9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.886430 1826875 pod_ready.go:83] waiting for pod "kube-scheduler-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:45.086911 1826875 pod_ready.go:94] pod "kube-scheduler-functional-498341" is "Ready"
	I1124 09:22:45.086928 1826875 pod_ready.go:86] duration metric: took 1.200482863s for pod "kube-scheduler-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:45.086940 1826875 pod_ready.go:40] duration metric: took 11.114770911s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:22:45.182248 1826875 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1124 09:22:45.188854 1826875 out.go:179] * Done! kubectl is now configured to use "functional-498341" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:25:34 functional-498341 crio[3532]: time="2025-11-24T09:25:34.944199784Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a497d6b1-e2a4-42e0-920f-38d50f667b6b name=/runtime.v1.ImageService/PullImage
	Nov 24 09:26:00 functional-498341 crio[3532]: time="2025-11-24T09:26:00.111263823Z" level=info msg="Pulling image: docker.io/nginx:latest" id=7e3bda8a-92cc-4c64-96bb-7d6186b11c46 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:26:00 functional-498341 crio[3532]: time="2025-11-24T09:26:00.160567877Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:26:30 functional-498341 crio[3532]: time="2025-11-24T09:26:30.468768508Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:27:00 functional-498341 crio[3532]: time="2025-11-24T09:27:00.771150273Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2936a903-894d-440a-ac99-7fef05fdfc62 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:27:18 functional-498341 crio[3532]: time="2025-11-24T09:27:18.064874611Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-t27wr/POD" id=a8dc3577-0439-4426-84dd-52854b94f5ad name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:27:18 functional-498341 crio[3532]: time="2025-11-24T09:27:18.06496029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 09:27:18 functional-498341 crio[3532]: time="2025-11-24T09:27:18.073814878Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-t27wr Namespace:default ID:5cc5e3f0bd663229d44d4130d5b8b86fd3bf03c9170b33519ef0a4ce911a7e10 UID:8b2860cf-9293-4539-9b71-9d07bea924d9 NetNS:/var/run/netns/dbf0a5cc-68b2-4941-b180-48fd56743488 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002404230}] Aliases:map[]}"
	Nov 24 09:27:18 functional-498341 crio[3532]: time="2025-11-24T09:27:18.073873201Z" level=info msg="Adding pod default_hello-node-75c85bcc94-t27wr to CNI network \"kindnet\" (type=ptp)"
	Nov 24 09:27:18 functional-498341 crio[3532]: time="2025-11-24T09:27:18.0858433Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-t27wr Namespace:default ID:5cc5e3f0bd663229d44d4130d5b8b86fd3bf03c9170b33519ef0a4ce911a7e10 UID:8b2860cf-9293-4539-9b71-9d07bea924d9 NetNS:/var/run/netns/dbf0a5cc-68b2-4941-b180-48fd56743488 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002404230}] Aliases:map[]}"
	Nov 24 09:27:18 functional-498341 crio[3532]: time="2025-11-24T09:27:18.086055668Z" level=info msg="Checking pod default_hello-node-75c85bcc94-t27wr for CNI network kindnet (type=ptp)"
	Nov 24 09:27:18 functional-498341 crio[3532]: time="2025-11-24T09:27:18.092476427Z" level=info msg="Ran pod sandbox 5cc5e3f0bd663229d44d4130d5b8b86fd3bf03c9170b33519ef0a4ce911a7e10 with infra container: default/hello-node-75c85bcc94-t27wr/POD" id=a8dc3577-0439-4426-84dd-52854b94f5ad name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 09:27:18 functional-498341 crio[3532]: time="2025-11-24T09:27:18.094057215Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=75047d04-50c1-47bb-964c-c9cce0685d78 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:27:34 functional-498341 crio[3532]: time="2025-11-24T09:27:34.11034198Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fe7fb6b0-ef8e-4f8e-9dc6-f46d303c5781 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:27:41 functional-498341 crio[3532]: time="2025-11-24T09:27:41.110084144Z" level=info msg="Pulling image: docker.io/nginx:latest" id=5cc4c0ce-dcfb-4160-a194-8a1536267afb name=/runtime.v1.ImageService/PullImage
	Nov 24 09:27:41 functional-498341 crio[3532]: time="2025-11-24T09:27:41.112329645Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:28:11 functional-498341 crio[3532]: time="2025-11-24T09:28:11.442367775Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:28:41 functional-498341 crio[3532]: time="2025-11-24T09:28:41.699186608Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5e3d5c84-054b-4874-8fd2-44ee6aaa86ed name=/runtime.v1.ImageService/PullImage
	Nov 24 09:28:41 functional-498341 crio[3532]: time="2025-11-24T09:28:41.700025782Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=53e356c2-2c40-425a-9efd-d1662184fbbf name=/runtime.v1.ImageService/PullImage
	Nov 24 09:29:36 functional-498341 crio[3532]: time="2025-11-24T09:29:36.110159981Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cc0ee32b-9f46-44c6-94f7-c2b1afec7cfb name=/runtime.v1.ImageService/PullImage
	Nov 24 09:30:09 functional-498341 crio[3532]: time="2025-11-24T09:30:09.10998066Z" level=info msg="Pulling image: docker.io/nginx:latest" id=0600bdee-68ec-4cb2-8f0b-fb4fe0eebd84 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:30:09 functional-498341 crio[3532]: time="2025-11-24T09:30:09.115840141Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:30:39 functional-498341 crio[3532]: time="2025-11-24T09:30:39.459084309Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:31:09 functional-498341 crio[3532]: time="2025-11-24T09:31:09.743522495Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d73abe61-c385-4a35-8355-8e447367d5c9 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:31:24 functional-498341 crio[3532]: time="2025-11-24T09:31:24.111012537Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8368370e-93bf-40b4-b12b-d95cf5fa347a name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c637ec76ab83c       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   63cf0c579e198       nginx-svc                                   default
	1fb0c9b9a85a0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       4                   feaeb05d97102       storage-provisioner                         kube-system
	71b403f584411       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  10 minutes ago      Running             kube-proxy                3                   191e021fbd5e9       kube-proxy-4n9vx                            kube-system
	a891e2cd44b94       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   4a18924a512d0       kindnet-dxrpc                               kube-system
	15e71e99b984a       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                  10 minutes ago      Running             kube-apiserver            0                   d52339a0af995       kube-apiserver-functional-498341            kube-system
	baa104dad4c40       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  10 minutes ago      Running             kube-scheduler            3                   b10cd6e774a9f       kube-scheduler-functional-498341            kube-system
	c108a7442e642       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  10 minutes ago      Running             kube-controller-manager   3                   74279f5069235       kube-controller-manager-functional-498341   kube-system
	9aa825831f876       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  10 minutes ago      Running             etcd                      3                   56ec423187fee       etcd-functional-498341                      kube-system
	559498b49775e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       3                   feaeb05d97102       storage-provisioner                         kube-system
	4f74f3fa64dec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   edc04b4c96ff0       coredns-66bc5c9577-vfd2t                    kube-system
	04dc6d3814bef       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  11 minutes ago      Exited              kube-controller-manager   2                   74279f5069235       kube-controller-manager-functional-498341   kube-system
	49717583e9f2f       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  11 minutes ago      Exited              etcd                      2                   56ec423187fee       etcd-functional-498341                      kube-system
	93d44c5402102       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  11 minutes ago      Exited              kube-scheduler            2                   b10cd6e774a9f       kube-scheduler-functional-498341            kube-system
	3e05c09486ff2       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  11 minutes ago      Exited              kube-proxy                2                   191e021fbd5e9       kube-proxy-4n9vx                            kube-system
	d917fc755b360       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   4a18924a512d0       kindnet-dxrpc                               kube-system
	145da221b5b69       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   edc04b4c96ff0       coredns-66bc5c9577-vfd2t                    kube-system
	
	
	==> coredns [145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41033 - 41467 "HINFO IN 7622388333576306592.7037694830251736267. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031822616s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4f74f3fa64dec4ab5760c54d4c13bd86a207e5012bffa99ac8d9fa91691713d5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36179 - 13838 "HINFO IN 7540107475593800547.8826876563962152014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.057152215s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41354->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41364->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41374->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-498341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-498341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=functional-498341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_20_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:20:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-498341
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:33:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:32:32 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:32:32 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:32:32 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:32:32 +0000   Mon, 24 Nov 2025 09:21:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-498341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                b19cc9fb-383b-4269-9c57-72146af388e0
	  Boot ID:                    27a92f9c-55a4-4798-92be-317cdb891088
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-t27wr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  default                     hello-node-connect-7d85dfc575-ktl8q          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-vfd2t                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-498341                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-dxrpc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-498341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-498341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4n9vx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-498341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-498341 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316] <==
	{"level":"info","ts":"2025-11-24T09:22:01.780383Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T09:22:01.781185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:22:01.782015Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T09:22:01.784941Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-24T09:22:01.797885Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-24T09:22:01.798067Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T09:22:01.800939Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-11-24T09:22:02.538050Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T09:22:02.538155Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-498341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T09:22:02.538299Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T09:22:02.538404Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T09:22:02.541194Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.541338Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-24T09:22:02.541443Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T09:22:02.541493Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541743Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541816Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T09:22:02.541856Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541928Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541963Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T09:22:02.541994Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.549914Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T09:22:02.550079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.550142Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T09:22:02.550190Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-498341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [9aa825831f876fd8076d516a591bb4a899307d3383d1d114c317d0483577d5e2] <==
	{"level":"warn","ts":"2025-11-24T09:22:29.501897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.531688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.557002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.593633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.617017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.662247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.693042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.726777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.745495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.766147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.779633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.801975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.813601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.859384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.877233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.893999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.917660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.936429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.969446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.985823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.999509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:30.089235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T09:32:28.297315Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2025-11-24T09:32:28.320773Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1071,"took":"23.13167ms","hash":2743139146,"current-db-size-bytes":3289088,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-24T09:32:28.320845Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2743139146,"revision":1071,"compact-revision":-1}
	
	
	==> kernel <==
	 09:33:05 up  8:15,  0 user,  load average: 0.27, 0.44, 1.46
	Linux functional-498341 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a891e2cd44b943fcb0b33577c5e1ba116b71c5708ee7e684e46226d679200d3e] <==
	I1124 09:31:01.708134       1 main.go:301] handling current node
	I1124 09:31:11.708070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:31:11.708105       1 main.go:301] handling current node
	I1124 09:31:21.708077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:31:21.708182       1 main.go:301] handling current node
	I1124 09:31:31.708344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:31:31.708478       1 main.go:301] handling current node
	I1124 09:31:41.708045       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:31:41.708082       1 main.go:301] handling current node
	I1124 09:31:51.708107       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:31:51.708142       1 main.go:301] handling current node
	I1124 09:32:01.715103       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:32:01.715245       1 main.go:301] handling current node
	I1124 09:32:11.708436       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:32:11.708467       1 main.go:301] handling current node
	I1124 09:32:21.707860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:32:21.707893       1 main.go:301] handling current node
	I1124 09:32:31.708675       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:32:31.708780       1 main.go:301] handling current node
	I1124 09:32:41.708420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:32:41.708457       1 main.go:301] handling current node
	I1124 09:32:51.714344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:32:51.714448       1 main.go:301] handling current node
	I1124 09:33:01.713559       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:33:01.713595       1 main.go:301] handling current node
	
	
	==> kindnet [d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0] <==
	I1124 09:22:00.524698       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:22:00.524978       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 09:22:00.525163       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:22:00.525178       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:22:00.525193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:22:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:22:00.843833       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:22:00.843949       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:22:00.843993       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:22:00.844168       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 09:22:10.844778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:22:10.845799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 09:22:10.846053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:22:10.852198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 09:22:21.928399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 09:22:22.048527       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:22:22.272677       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:22:22.402635       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	
	
	==> kube-apiserver [15e71e99b984ad56351b668dea7807b14fb8676c4c2532e7c2ef16079ae69280] <==
	I1124 09:22:30.895694       1 aggregator.go:171] initial CRD sync complete...
	I1124 09:22:30.895724       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:22:30.895751       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:22:30.895777       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:22:30.904992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:22:30.905062       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 09:22:30.910665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 09:22:30.914166       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1124 09:22:30.915452       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:22:30.917713       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 09:22:31.120722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:22:31.704410       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:22:32.581570       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:22:32.700466       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:22:32.771113       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:22:32.778890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:22:36.293726       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:22:36.297775       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:22:36.299633       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:22:48.647984       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.152.182"}
	I1124 09:22:54.729238       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.117.210"}
	I1124 09:23:03.397458       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.131.230"}
	E1124 09:23:13.428237       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50192: use of closed network connection
	I1124 09:27:17.840622       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.65.110"}
	I1124 09:32:30.820252       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e] <==
	
	
	==> kube-controller-manager [c108a7442e642f500cad5954b3fface6603225ecb02334b8443c670f0ef39abc] <==
	I1124 09:22:34.212037       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 09:22:34.212065       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 09:22:34.212096       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 09:22:34.214371       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:22:34.214946       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:22:34.216039       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:34.216070       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:22:34.216168       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:22:34.221500       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 09:22:34.222707       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 09:22:34.223836       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 09:22:34.251290       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:22:34.252574       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 09:22:34.255166       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 09:22:34.256340       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:22:34.256393       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:22:34.256439       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:22:34.257650       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:22:34.257654       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:22:34.258886       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 09:22:34.261228       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:22:34.261258       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:22:34.262384       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:22:34.267714       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:34.270102       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26] <==
	I1124 09:22:01.980453       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:22:02.484340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1124 09:22:12.590067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-498341&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [71b403f5844112bd1e54c4ac1415199069711a4ca59aeb173507308c18b0aa8d] <==
	I1124 09:22:31.472338       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:22:31.566912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:22:31.667564       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:22:31.667606       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 09:22:31.667698       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:22:31.687130       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:22:31.687187       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:22:31.691183       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:22:31.691508       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:22:31.691534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:22:31.694760       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:22:31.694838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:22:31.695221       1 config.go:200] "Starting service config controller"
	I1124 09:22:31.695280       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:22:31.695628       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:22:31.695698       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:22:31.696187       1 config.go:309] "Starting node config controller"
	I1124 09:22:31.696252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:22:31.696283       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:22:31.795487       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:22:31.795558       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:22:31.795818       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295] <==
	I1124 09:22:02.766884       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [baa104dad4c402409f627a01e3f9b0455ab0b1a3b1f384be692c3db9bf5b6e79] <==
	I1124 09:22:27.117381       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:22:30.792631       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:22:30.793242       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:22:30.793313       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:22:30.793345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:22:30.835359       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 09:22:30.835471       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:22:30.838156       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:22:30.838206       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:22:30.839079       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:22:30.839608       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:22:30.939283       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:31:24 functional-498341 kubelet[4036]: E1124 09:31:24.111664    4036 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-ktl8q_default(ab1e6451-329d-49eb-83f1-7cc1b00f3e21): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Nov 24 09:31:24 functional-498341 kubelet[4036]: E1124 09:31:24.111690    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:31:31 functional-498341 kubelet[4036]: E1124 09:31:31.109503    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:31:36 functional-498341 kubelet[4036]: E1124 09:31:36.110286    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:31:36 functional-498341 kubelet[4036]: E1124 09:31:36.111145    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:31:43 functional-498341 kubelet[4036]: E1124 09:31:43.109567    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:31:48 functional-498341 kubelet[4036]: E1124 09:31:48.109838    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:31:51 functional-498341 kubelet[4036]: E1124 09:31:51.109368    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:31:58 functional-498341 kubelet[4036]: E1124 09:31:58.110093    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:32:02 functional-498341 kubelet[4036]: E1124 09:32:02.110335    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:32:03 functional-498341 kubelet[4036]: E1124 09:32:03.109991    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:32:13 functional-498341 kubelet[4036]: E1124 09:32:13.109778    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:32:14 functional-498341 kubelet[4036]: E1124 09:32:14.109560    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:32:16 functional-498341 kubelet[4036]: E1124 09:32:16.110053    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:32:26 functional-498341 kubelet[4036]: E1124 09:32:26.110050    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:32:26 functional-498341 kubelet[4036]: E1124 09:32:26.110715    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:32:30 functional-498341 kubelet[4036]: E1124 09:32:30.110314    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:32:37 functional-498341 kubelet[4036]: E1124 09:32:37.109735    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:32:39 functional-498341 kubelet[4036]: E1124 09:32:39.109633    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:32:44 functional-498341 kubelet[4036]: E1124 09:32:44.109528    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:32:51 functional-498341 kubelet[4036]: E1124 09:32:51.110082    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:32:51 functional-498341 kubelet[4036]: E1124 09:32:51.110082    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:32:55 functional-498341 kubelet[4036]: E1124 09:32:55.110090    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:33:02 functional-498341 kubelet[4036]: E1124 09:33:02.110490    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-t27wr" podUID="8b2860cf-9293-4539-9b71-9d07bea924d9"
	Nov 24 09:33:04 functional-498341 kubelet[4036]: E1124 09:33:04.109395    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	
	
	==> storage-provisioner [1fb0c9b9a85a0fec8a1ab2c37119c62c6681f8e5e630a9272f50a23e10b7fd9a] <==
	W1124 09:32:41.693575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:43.696188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:43.700633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:45.703704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:45.710057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:47.712670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:47.716941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:49.720194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:49.724846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:51.728250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:51.732467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:53.735956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:53.742630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:55.746597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:55.750858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:57.753721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:57.760396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:59.762954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:32:59.767051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:33:01.770127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:33:01.774508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:33:03.777918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:33:03.782745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:33:05.786756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:33:05.793673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [559498b49775e56118c49fa50a90d10b8e09907d7e647d35eb62a47bc1b3323c] <==
	I1124 09:22:13.582872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:22:23.886089       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-498341 -n functional-498341
helpers_test.go:269: (dbg) Run:  kubectl --context functional-498341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-t27wr hello-node-connect-7d85dfc575-ktl8q sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-498341 describe pod hello-node-75c85bcc94-t27wr hello-node-connect-7d85dfc575-ktl8q sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-498341 describe pod hello-node-75c85bcc94-t27wr hello-node-connect-7d85dfc575-ktl8q sp-pod:

-- stdout --
	Name:             hello-node-75c85bcc94-t27wr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:27:17 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvjpt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vvjpt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m49s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-t27wr to functional-498341
	  Normal   Pulling    2m7s (x5 over 5m48s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     117s (x5 over 5m48s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     117s (x5 over 5m48s)  kubelet            Error: ErrImagePull
	  Warning  Failed     40s (x16 over 5m47s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x19 over 5m47s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ktl8q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:23:03 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dch74 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dch74:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ktl8q to functional-498341
	  Normal   Pulling    4m33s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4m25s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     4m25s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m21s (x16 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m19s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:23:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s4942 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-s4942:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m52s                  default-scheduler  Successfully assigned default/sp-pod to functional-498341
	  Warning  Failed     7m32s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m6s                   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m57s (x5 over 9m52s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     117s (x3 over 8m51s)   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     117s (x5 over 8m51s)   kubelet            Error: ErrImagePull
	  Warning  Failed     40s (x16 over 8m50s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x19 over 8m50s)    kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.62s)
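Editor's note: the "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" failures above come from CRI-O resolving unqualified image names through the containers registries configuration. A minimal sketch of the relevant knobs, assuming a standard containers-registries.conf layout (the drop-in path below is illustrative, not taken from this run; using a fully qualified name such as docker.io/kicbase/echo-server in the manifest sidesteps the policy entirely):

```toml
# /etc/containers/registries.conf.d/short-names.conf (illustrative path)
# "enforcing" rejects ambiguous short names such as kicbase/echo-server
# when more than one unqualified-search registry could supply them;
# an explicit alias resolves the name deterministically instead.
short-name-mode = "enforcing"

[aliases]
  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
```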

TestFunctional/parallel/PersistentVolumeClaim (262.86s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4a8afbbb-2929-4192-a5d1-4e6257b297ae] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003773082s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-498341 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-498341 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-498341 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-498341 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a4149733-0085-4895-90dc-7214fda89b70] Pending
helpers_test.go:352: "sp-pod" [a4149733-0085-4895-90dc-7214fda89b70] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a4149733-0085-4895-90dc-7214fda89b70] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003542908s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-498341 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-498341 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-498341 delete -f testdata/storage-provisioner/pod.yaml: (1.02139957s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-498341 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6acc07e2-d1b3-45c3-bff6-9989cf802917] Pending
helpers_test.go:352: "sp-pod" [6acc07e2-d1b3-45c3-bff6-9989cf802917] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1124 09:23:20.717730 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:25:36.849585 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:26:04.559488 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-498341 -n functional-498341
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-24 09:27:14.97543981 +0000 UTC m=+887.555272115
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-498341 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-498341 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-498341/192.168.49.2
Start Time:       Mon, 24 Nov 2025 09:23:14 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s4942 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-s4942:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-498341
  Warning  Failed     3m                   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     101s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    88s (x2 over 2m59s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     88s (x2 over 2m59s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    75s (x3 over 4m1s)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     15s (x3 over 3m)     kubelet            Error: ErrImagePull
  Warning  Failed     15s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-498341 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-498341 logs sp-pod -n default: exit status 1 (111.86486ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-498341 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
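The repeated kubelet "Error syncing pod" lines in these post-mortems follow a fixed key="value" layout, so image-pull failures can be triaged mechanically when sifting long reports like this one. A minimal sketch, assuming only the line format visible in this log (this is not a kubelet API; `triage` is a hypothetical helper):

```python
import re

# Matches kubelet lines of the form:
#   ... err="<go-quoted error>" pod="<namespace>/<name>" podUID="..."
# The err value may contain backslash-escaped quotes, so the pattern
# consumes either a non-quote/non-backslash char or an escape pair.
LINE_RE = re.compile(r'err="(?P<err>(?:[^"\\]|\\.)*)".*?pod="(?P<pod>[^"]+)"')

def triage(line: str):
    """Return (pod, reason) for an image-pull error line, else None."""
    m = LINE_RE.search(line)
    if not m:
        return None
    reason = "ImagePullBackOff" if "ImagePullBackOff" in m.group("err") else "other"
    return m.group("pod"), reason
```

Feeding it one of the lines above groups the failures by pod, which makes it obvious that only the rate-limited and short-name-ambiguous images are stuck.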
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-498341
helpers_test.go:243: (dbg) docker inspect functional-498341:

-- stdout --
	[
	    {
	        "Id": "6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f",
	        "Created": "2025-11-24T09:19:55.998787995Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1822668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:19:56.073167874Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/hosts",
	        "LogPath": "/var/lib/docker/containers/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f/6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f-json.log",
	        "Name": "/functional-498341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-498341:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-498341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c463f059d60044f1dc3699524c1f9a96c969669da6578d658707454ef8dc08f",
	                "LowerDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/merged",
	                "UpperDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/diff",
	                "WorkDir": "/var/lib/docker/overlay2/99f08ca05f85c22cbb955c3277b34b9c5c126de50e916fc0d0f5febfb71a4758/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-498341",
	                "Source": "/var/lib/docker/volumes/functional-498341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-498341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-498341",
	                "name.minikube.sigs.k8s.io": "functional-498341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "240a4dd1813daa01947671dc124987da45cf1671d25b32d2f3891003a67f2fe7",
	            "SandboxKey": "/var/run/docker/netns/240a4dd1813d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35000"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35001"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35004"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35002"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35003"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-498341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:13:66:1a:33:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c1abece26d5d0dbba2a759db10fd2d39adcd67e3473b07c793acf2b30828945",
	                    "EndpointID": "08ec2567ced7c59a5794dec53e18552f77fb769f748c21d66ed8b80f607753ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-498341",
	                        "6c463f059d60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-498341 -n functional-498341
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 logs -n 25: (1.491215927s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-498341 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ kubectl │ functional-498341 kubectl -- --context functional-498341 get pods                                                          │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:21 UTC │
	│ start   │ -p functional-498341 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:21 UTC │ 24 Nov 25 09:22 UTC │
	│ service │ invalid-svc -p functional-498341                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ config  │ functional-498341 config unset cpus                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ cp      │ functional-498341 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ config  │ functional-498341 config get cpus                                                                                          │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ config  │ functional-498341 config set cpus 2                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ config  │ functional-498341 config get cpus                                                                                          │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ config  │ functional-498341 config unset cpus                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ ssh     │ functional-498341 ssh -n functional-498341 sudo cat /home/docker/cp-test.txt                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ config  │ functional-498341 config get cpus                                                                                          │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ ssh     │ functional-498341 ssh echo hello                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ cp      │ functional-498341 cp functional-498341:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2522289623/001/cp-test.txt │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ ssh     │ functional-498341 ssh cat /etc/hostname                                                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ ssh     │ functional-498341 ssh -n functional-498341 sudo cat /home/docker/cp-test.txt                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ tunnel  │ functional-498341 tunnel --alsologtostderr                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ tunnel  │ functional-498341 tunnel --alsologtostderr                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ cp      │ functional-498341 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ ssh     │ functional-498341 ssh -n functional-498341 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │ 24 Nov 25 09:22 UTC │
	│ tunnel  │ functional-498341 tunnel --alsologtostderr                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:22 UTC │                     │
	│ addons  │ functional-498341 addons list                                                                                              │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	│ addons  │ functional-498341 addons list -o json                                                                                      │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:23 UTC │ 24 Nov 25 09:23 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:21:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:21:45.751921 1826875 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:21:45.752047 1826875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:21:45.752051 1826875 out.go:374] Setting ErrFile to fd 2...
	I1124 09:21:45.752054 1826875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:21:45.752324 1826875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:21:45.752672 1826875 out.go:368] Setting JSON to false
	I1124 09:21:45.753610 1826875 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29056,"bootTime":1763947050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:21:45.753677 1826875 start.go:143] virtualization:  
	I1124 09:21:45.757154 1826875 out.go:179] * [functional-498341] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:21:45.760190 1826875 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:21:45.760291 1826875 notify.go:221] Checking for updates...
	I1124 09:21:45.766186 1826875 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:21:45.769174 1826875 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:21:45.772188 1826875 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:21:45.775289 1826875 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:21:45.778333 1826875 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:21:45.781989 1826875 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:21:45.782120 1826875 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:21:45.812937 1826875 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:21:45.813041 1826875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:21:45.882883 1826875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-24 09:21:45.872964508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:21:45.882982 1826875 docker.go:319] overlay module found
	I1124 09:21:45.886170 1826875 out.go:179] * Using the docker driver based on existing profile
	I1124 09:21:45.889003 1826875 start.go:309] selected driver: docker
	I1124 09:21:45.889014 1826875 start.go:927] validating driver "docker" against &{Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:21:45.889150 1826875 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:21:45.889260 1826875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:21:45.955827 1826875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-24 09:21:45.946686867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:21:45.956233 1826875 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:21:45.956258 1826875 cni.go:84] Creating CNI manager for ""
	I1124 09:21:45.956313 1826875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:21:45.956360 1826875 start.go:353] cluster config:
	{Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:21:45.959489 1826875 out.go:179] * Starting "functional-498341" primary control-plane node in "functional-498341" cluster
	I1124 09:21:45.962288 1826875 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:21:45.965300 1826875 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:21:45.968223 1826875 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:21:45.968456 1826875 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:21:45.968483 1826875 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1124 09:21:45.968491 1826875 cache.go:65] Caching tarball of preloaded images
	I1124 09:21:45.968572 1826875 preload.go:238] Found /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 09:21:45.968580 1826875 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 09:21:45.968700 1826875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/config.json ...
	I1124 09:21:45.989414 1826875 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:21:45.989424 1826875 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:21:45.989451 1826875 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:21:45.989481 1826875 start.go:360] acquireMachinesLock for functional-498341: {Name:mk92ae30192553f98cf7dcbc727ad92a239ba1db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:21:45.989546 1826875 start.go:364] duration metric: took 48.591µs to acquireMachinesLock for "functional-498341"
	I1124 09:21:45.989566 1826875 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:21:45.989570 1826875 fix.go:54] fixHost starting: 
	I1124 09:21:45.989832 1826875 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:21:46.010042 1826875 fix.go:112] recreateIfNeeded on functional-498341: state=Running err=<nil>
	W1124 09:21:46.010070 1826875 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:21:46.013569 1826875 out.go:252] * Updating the running docker "functional-498341" container ...
	I1124 09:21:46.013602 1826875 machine.go:94] provisionDockerMachine start ...
	I1124 09:21:46.013705 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:46.032544 1826875 main.go:143] libmachine: Using SSH client type: native
	I1124 09:21:46.032913 1826875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35000 <nil> <nil>}
	I1124 09:21:46.032921 1826875 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:21:46.189282 1826875 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-498341
	
	I1124 09:21:46.189296 1826875 ubuntu.go:182] provisioning hostname "functional-498341"
	I1124 09:21:46.189397 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:46.206535 1826875 main.go:143] libmachine: Using SSH client type: native
	I1124 09:21:46.206833 1826875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35000 <nil> <nil>}
	I1124 09:21:46.206841 1826875 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-498341 && echo "functional-498341" | sudo tee /etc/hostname
	I1124 09:21:46.366656 1826875 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-498341
	
	I1124 09:21:46.366740 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:46.385164 1826875 main.go:143] libmachine: Using SSH client type: native
	I1124 09:21:46.385458 1826875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35000 <nil> <nil>}
	I1124 09:21:46.385480 1826875 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-498341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-498341/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-498341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:21:46.537521 1826875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:21:46.537536 1826875 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:21:46.537558 1826875 ubuntu.go:190] setting up certificates
	I1124 09:21:46.537567 1826875 provision.go:84] configureAuth start
	I1124 09:21:46.537625 1826875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-498341
	I1124 09:21:46.556077 1826875 provision.go:143] copyHostCerts
	I1124 09:21:46.556148 1826875 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:21:46.556162 1826875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:21:46.556241 1826875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:21:46.556340 1826875 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:21:46.556344 1826875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:21:46.556369 1826875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:21:46.556417 1826875 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:21:46.556420 1826875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:21:46.556442 1826875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:21:46.556485 1826875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-498341 san=[127.0.0.1 192.168.49.2 functional-498341 localhost minikube]
	I1124 09:21:47.106083 1826875 provision.go:177] copyRemoteCerts
	I1124 09:21:47.106143 1826875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:21:47.106180 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:47.124170 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:47.233216 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:21:47.250958 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:21:47.269786 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:21:47.287892 1826875 provision.go:87] duration metric: took 750.311721ms to configureAuth
	I1124 09:21:47.287909 1826875 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:21:47.288110 1826875 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:21:47.288209 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:47.306094 1826875 main.go:143] libmachine: Using SSH client type: native
	I1124 09:21:47.306454 1826875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35000 <nil> <nil>}
	I1124 09:21:47.306465 1826875 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:21:52.727122 1826875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:21:52.727134 1826875 machine.go:97] duration metric: took 6.713525843s to provisionDockerMachine
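The `CRIO_MINIKUBE_OPTIONS` drop-in written above can be reproduced locally; a minimal sketch of the same `printf | tee` pattern, writing under `/tmp` instead of `/etc/sysconfig` and skipping `sudo` and the `systemctl restart crio`:

```shell
# Scratch stand-in for /etc/sysconfig/crio.minikube.
mkdir -p /tmp/sysconfig-demo
printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | tee /tmp/sysconfig-demo/crio.minikube
```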
	I1124 09:21:52.727145 1826875 start.go:293] postStartSetup for "functional-498341" (driver="docker")
	I1124 09:21:52.727155 1826875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:21:52.727217 1826875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:21:52.727258 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:52.745490 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:52.849173 1826875 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:21:52.852715 1826875 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:21:52.852733 1826875 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:21:52.852742 1826875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:21:52.852796 1826875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:21:52.852869 1826875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:21:52.852954 1826875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:21:52.852997 1826875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:21:52.860716 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:21:52.878886 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:21:52.896922 1826875 start.go:296] duration metric: took 169.76288ms for postStartSetup
	I1124 09:21:52.897014 1826875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:21:52.897052 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:52.914395 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:53.014704 1826875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
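The two `df | awk` probes above report percent-used and whole-gigabytes-free for the volume backing `/var`; run as-is on a Linux host (GNU coreutils assumed for `-BG`):

```shell
# Percent of space used on the filesystem backing /var, e.g. "42%".
df -h /var | awk 'NR==2{print $5}'
# Whole gigabytes still available on the same filesystem, e.g. "97G".
df -BG /var | awk 'NR==2{print $4}'
```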
	I1124 09:21:53.019679 1826875 fix.go:56] duration metric: took 7.030101194s for fixHost
	I1124 09:21:53.019696 1826875 start.go:83] releasing machines lock for "functional-498341", held for 7.030141999s
	I1124 09:21:53.019785 1826875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-498341
	I1124 09:21:53.037767 1826875 ssh_runner.go:195] Run: cat /version.json
	I1124 09:21:53.037808 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:53.037848 1826875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:21:53.037913 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:21:53.059451 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:53.063440 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:21:53.250372 1826875 ssh_runner.go:195] Run: systemctl --version
	I1124 09:21:53.257034 1826875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:21:53.293945 1826875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:21:53.298334 1826875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:21:53.298402 1826875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:21:53.306478 1826875 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:21:53.306492 1826875 start.go:496] detecting cgroup driver to use...
	I1124 09:21:53.306524 1826875 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:21:53.306577 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:21:53.322766 1826875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:21:53.336145 1826875 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:21:53.336213 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:21:53.353503 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:21:53.367262 1826875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:21:53.501572 1826875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:21:53.650546 1826875 docker.go:234] disabling docker service ...
	I1124 09:21:53.650605 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:21:53.666421 1826875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:21:53.679764 1826875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:21:53.824985 1826875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:21:53.973752 1826875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:21:53.987475 1826875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:21:54.006394 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:54.165352 1826875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:21:54.165418 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.174992 1826875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:21:54.175074 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.184118 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.193149 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.202210 1826875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:21:54.210661 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.219837 1826875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:21:54.229290 1826875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
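The pause-image, cgroup-driver, and conmon_cgroup rewrites above are plain idempotent `sed` substitutions; a sketch against a scratch copy of the config (the real file is `/etc/crio/crio.conf.d/02-crio.conf`, and the stock starting values here are illustrative):

```shell
conf=/tmp/02-crio-demo.conf
# Scratch config with stock-looking values.
printf '%s\n' 'pause_image = "registry.k8s.io/pause:3.9"' 'cgroup_manager = "systemd"' > "$conf"
# Same substitutions the log runs via ssh_runner (without sudo here).
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
```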
	I1124 09:21:54.239225 1826875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:21:54.247116 1826875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:21:54.254543 1826875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:21:54.384822 1826875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:21:59.123618 1826875 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.738772424s)
	I1124 09:21:59.123633 1826875 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:21:59.123693 1826875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:21:59.127660 1826875 start.go:564] Will wait 60s for crictl version
	I1124 09:21:59.127714 1826875 ssh_runner.go:195] Run: which crictl
	I1124 09:21:59.131501 1826875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:21:59.160071 1826875 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:21:59.160144 1826875 ssh_runner.go:195] Run: crio --version
	I1124 09:21:59.188514 1826875 ssh_runner.go:195] Run: crio --version
	I1124 09:21:59.220805 1826875 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 09:21:59.223766 1826875 cli_runner.go:164] Run: docker network inspect functional-498341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:21:59.240048 1826875 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:21:59.247790 1826875 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1124 09:21:59.250720 1826875 kubeadm.go:884] updating cluster {Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:21:59.250946 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:59.415297 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:59.594588 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:59.788011 1826875 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:21:59.788155 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:21:59.935549 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:22:00.083592 1826875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 09:22:00.371846 1826875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:22:00.498401 1826875 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:22:00.498413 1826875 crio.go:433] Images already preloaded, skipping extraction
	I1124 09:22:00.498467 1826875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:22:00.591742 1826875 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:22:00.591755 1826875 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:22:00.591761 1826875 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.2 crio true true} ...
	I1124 09:22:00.591864 1826875 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-498341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:22:00.591942 1826875 ssh_runner.go:195] Run: crio config
	I1124 09:22:00.810572 1826875 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1124 09:22:00.811684 1826875 cni.go:84] Creating CNI manager for ""
	I1124 09:22:00.811697 1826875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:22:00.811712 1826875 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:22:00.811737 1826875 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-498341 NodeName:functional-498341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:22:00.811877 1826875 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-498341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:22:00.811951 1826875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:22:00.821153 1826875 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:22:00.821223 1826875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:22:00.836226 1826875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 09:22:00.861148 1826875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:22:00.884841 1826875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1124 09:22:00.907124 1826875 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:22:00.915166 1826875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:22:01.173356 1826875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:22:01.192408 1826875 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341 for IP: 192.168.49.2
	I1124 09:22:01.192419 1826875 certs.go:195] generating shared ca certs ...
	I1124 09:22:01.192433 1826875 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:22:01.192585 1826875 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:22:01.192628 1826875 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:22:01.192634 1826875 certs.go:257] generating profile certs ...
	I1124 09:22:01.192716 1826875 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.key
	I1124 09:22:01.192764 1826875 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/apiserver.key.fe75fa91
	I1124 09:22:01.192803 1826875 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/proxy-client.key
	I1124 09:22:01.192928 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:22:01.192958 1826875 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:22:01.192966 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:22:01.192992 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:22:01.193018 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:22:01.193041 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:22:01.193084 1826875 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:22:01.193768 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:22:01.217843 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:22:01.245248 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:22:01.275020 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:22:01.305798 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:22:01.328708 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:22:01.359997 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:22:01.387613 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:22:01.440861 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:22:01.509658 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:22:01.544601 1826875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:22:01.578782 1826875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:22:01.600041 1826875 ssh_runner.go:195] Run: openssl version
	I1124 09:22:01.608208 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:22:01.623861 1826875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:22:01.628355 1826875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:19 /usr/share/ca-certificates/18067042.pem
	I1124 09:22:01.628422 1826875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:22:01.686444 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:22:01.695877 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:22:01.710549 1826875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:22:01.715127 1826875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:22:01.715184 1826875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:22:01.781054 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:22:01.792601 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:22:01.806392 1826875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:22:01.813872 1826875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:19 /usr/share/ca-certificates/1806704.pem
	I1124 09:22:01.813933 1826875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:22:01.872953 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 09:22:01.888023 1826875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:22:01.896520 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:22:01.963293 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:22:02.022605 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:22:02.069864 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:22:02.123394 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:22:02.170814 1826875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:22:02.214655 1826875 kubeadm.go:401] StartCluster: {Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:22:02.214747 1826875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:22:02.214813 1826875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:22:02.258925 1826875 cri.go:89] found id: "04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e"
	I1124 09:22:02.258944 1826875 cri.go:89] found id: "49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316"
	I1124 09:22:02.258951 1826875 cri.go:89] found id: "93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295"
	I1124 09:22:02.258954 1826875 cri.go:89] found id: "1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f"
	I1124 09:22:02.258957 1826875 cri.go:89] found id: "3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26"
	I1124 09:22:02.258959 1826875 cri.go:89] found id: "d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0"
	I1124 09:22:02.258964 1826875 cri.go:89] found id: "9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186"
	I1124 09:22:02.258966 1826875 cri.go:89] found id: "fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251"
	I1124 09:22:02.258972 1826875 cri.go:89] found id: "6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131"
	I1124 09:22:02.258981 1826875 cri.go:89] found id: "220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169"
	I1124 09:22:02.258986 1826875 cri.go:89] found id: "97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2"
	I1124 09:22:02.258988 1826875 cri.go:89] found id: "e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22"
	I1124 09:22:02.258994 1826875 cri.go:89] found id: "524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf"
	I1124 09:22:02.258996 1826875 cri.go:89] found id: "145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7"
	I1124 09:22:02.258998 1826875 cri.go:89] found id: ""
	I1124 09:22:02.259052 1826875 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 09:22:02.278648 1826875 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:22:02Z" level=error msg="open /run/runc: no such file or directory"
	I1124 09:22:02.278733 1826875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:22:02.290350 1826875 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:22:02.290359 1826875 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:22:02.290418 1826875 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:22:02.301883 1826875 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:22:02.302457 1826875 kubeconfig.go:125] found "functional-498341" server: "https://192.168.49.2:8441"
	I1124 09:22:02.303938 1826875 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:22:02.318445 1826875 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 09:20:03.100427242 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 09:22:00.901955469 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1124 09:22:02.318454 1826875 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:22:02.318475 1826875 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:22:02.318541 1826875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:22:02.370948 1826875 cri.go:89] found id: "04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e"
	I1124 09:22:02.370959 1826875 cri.go:89] found id: "49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316"
	I1124 09:22:02.370962 1826875 cri.go:89] found id: "93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295"
	I1124 09:22:02.370965 1826875 cri.go:89] found id: "1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f"
	I1124 09:22:02.370967 1826875 cri.go:89] found id: "3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26"
	I1124 09:22:02.370970 1826875 cri.go:89] found id: "d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0"
	I1124 09:22:02.370976 1826875 cri.go:89] found id: "9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186"
	I1124 09:22:02.370980 1826875 cri.go:89] found id: "fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251"
	I1124 09:22:02.370982 1826875 cri.go:89] found id: "6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131"
	I1124 09:22:02.370987 1826875 cri.go:89] found id: "220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169"
	I1124 09:22:02.370990 1826875 cri.go:89] found id: "97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2"
	I1124 09:22:02.371001 1826875 cri.go:89] found id: "e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22"
	I1124 09:22:02.371003 1826875 cri.go:89] found id: "524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf"
	I1124 09:22:02.371005 1826875 cri.go:89] found id: "145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7"
	I1124 09:22:02.371007 1826875 cri.go:89] found id: ""
	I1124 09:22:02.371012 1826875 cri.go:252] Stopping containers: [04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e 49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316 93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295 1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f 3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26 d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0 9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186 fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251 6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131 220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169 97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2 e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22 524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf 145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7]
	I1124 09:22:02.371077 1826875 ssh_runner.go:195] Run: which crictl
	I1124 09:22:02.377716 1826875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e 49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316 93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295 1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f 3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26 d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0 9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186 fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251 6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131 220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169 97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2 e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22 524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf 145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7
	I1124 09:22:23.246006 1826875 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e 49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316 93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295 1d056b089078bbbd8bb45973174275c4543438608bd9ff6c7a5e525bcac3a81f 3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26 d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0 9523595bd7fa3fcef8e211c259e738f64833b0aced85f67c0a9bb4bf8eb41186 fad7634f9abf1af5d3ada0abd5b932624da4c3da57ee9c20e78ba59a491e7251 6d020444445914fd7737106a969908df8f32c7c973c852db11028c9c7d733131 220d03bfbf9da446d244fa954ca0f102421bcbf5ed5bb9e341d5b8782e6f8169 97ad9b0f5260b5c4221889028e74ee7ca26fea66a55fdd8c2ba8ea5ff14870f2 e0ec4b05b69b906a006f522e0c532746f35d8fec255a8a095dc4264958dd6f22 524a6587809d92e30a6f91917611cb4c9577d79d06e446d2ac5e5f725889f2bf 145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7: (20.868253035s)
	I1124 09:22:23.246079 1826875 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:22:23.369062 1826875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:22:23.377493 1826875 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 24 09:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 24 09:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 24 09:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov 24 09:20 /etc/kubernetes/scheduler.conf
	
	I1124 09:22:23.377557 1826875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:22:23.385832 1826875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:22:23.394551 1826875 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:22:23.394608 1826875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:22:23.402578 1826875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:22:23.410561 1826875 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:22:23.410618 1826875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:22:23.418230 1826875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:22:23.426123 1826875 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:22:23.426202 1826875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:22:23.433956 1826875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:22:23.442317 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:23.489312 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:25.710006 1826875 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.220667297s)
	I1124 09:22:25.710087 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:25.939124 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:26.003887 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:26.066592 1826875 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:22:26.066663 1826875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:26.567640 1826875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:27.066789 1826875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:27.082983 1826875 api_server.go:72] duration metric: took 1.016391114s to wait for apiserver process to appear ...
	I1124 09:22:27.082998 1826875 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:22:27.083015 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:30.737047 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:22:30.737062 1826875 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:22:30.737074 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:30.792407 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:22:30.792422 1826875 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:22:31.083876 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:31.092063 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:22:31.092079 1826875 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:22:31.583679 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:31.591692 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:22:31.591707 1826875 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:22:32.083304 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:32.091333 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1124 09:22:32.105031 1826875 api_server.go:141] control plane version: v1.34.2
	I1124 09:22:32.105049 1826875 api_server.go:131] duration metric: took 5.022045671s to wait for apiserver health ...
	I1124 09:22:32.105057 1826875 cni.go:84] Creating CNI manager for ""
	I1124 09:22:32.105065 1826875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:22:32.108297 1826875 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:22:32.111333 1826875 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:22:32.120899 1826875 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:22:32.120910 1826875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:22:32.135202 1826875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:22:32.588500 1826875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:22:32.592036 1826875 system_pods.go:59] 8 kube-system pods found
	I1124 09:22:32.592054 1826875 system_pods.go:61] "coredns-66bc5c9577-vfd2t" [d8b73e41-010c-4712-933f-6ca47fb26a6a] Running
	I1124 09:22:32.592062 1826875 system_pods.go:61] "etcd-functional-498341" [06ec2eb1-a0ff-4f32-8f39-031eb5592563] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:22:32.592066 1826875 system_pods.go:61] "kindnet-dxrpc" [ac0a9329-4003-4328-9896-e4fbc9ec36bc] Running
	I1124 09:22:32.592071 1826875 system_pods.go:61] "kube-apiserver-functional-498341" [53f00290-216f-48af-8fe6-1806a95ebee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:22:32.592076 1826875 system_pods.go:61] "kube-controller-manager-functional-498341" [dbaaefa2-eeae-425b-9ce7-732b2602ad3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:22:32.592080 1826875 system_pods.go:61] "kube-proxy-4n9vx" [70c6582b-4573-4300-9bf8-c45ba36f762a] Running
	I1124 09:22:32.592084 1826875 system_pods.go:61] "kube-scheduler-functional-498341" [6f930e44-5f27-4bec-a0ac-bfe6bbc2838b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:22:32.592087 1826875 system_pods.go:61] "storage-provisioner" [4a8afbbb-2929-4192-a5d1-4e6257b297ae] Running
	I1124 09:22:32.592092 1826875 system_pods.go:74] duration metric: took 3.581782ms to wait for pod list to return data ...
	I1124 09:22:32.592098 1826875 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:22:32.595047 1826875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 09:22:32.595066 1826875 node_conditions.go:123] node cpu capacity is 2
	I1124 09:22:32.595076 1826875 node_conditions.go:105] duration metric: took 2.974455ms to run NodePressure ...
	I1124 09:22:32.595137 1826875 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:22:32.849781 1826875 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1124 09:22:32.853526 1826875 kubeadm.go:744] kubelet initialised
	I1124 09:22:32.853537 1826875 kubeadm.go:745] duration metric: took 3.742809ms waiting for restarted kubelet to initialise ...
	I1124 09:22:32.853550 1826875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:22:32.863017 1826875 ops.go:34] apiserver oom_adj: -16
	I1124 09:22:32.863046 1826875 kubeadm.go:602] duration metric: took 30.572682401s to restartPrimaryControlPlane
	I1124 09:22:32.863055 1826875 kubeadm.go:403] duration metric: took 30.648420719s to StartCluster
	I1124 09:22:32.863069 1826875 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:22:32.863153 1826875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:22:32.863862 1826875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:22:32.864137 1826875 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:22:32.864365 1826875 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:22:32.864493 1826875 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:22:32.864552 1826875 addons.go:70] Setting storage-provisioner=true in profile "functional-498341"
	I1124 09:22:32.864564 1826875 addons.go:239] Setting addon storage-provisioner=true in "functional-498341"
	W1124 09:22:32.864580 1826875 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:22:32.864601 1826875 host.go:66] Checking if "functional-498341" exists ...
	I1124 09:22:32.865033 1826875 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:22:32.865550 1826875 addons.go:70] Setting default-storageclass=true in profile "functional-498341"
	I1124 09:22:32.865564 1826875 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-498341"
	I1124 09:22:32.865829 1826875 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:22:32.867573 1826875 out.go:179] * Verifying Kubernetes components...
	I1124 09:22:32.871342 1826875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:22:32.904200 1826875 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:22:32.907448 1826875 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:22:32.907462 1826875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:22:32.907533 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:22:32.916161 1826875 addons.go:239] Setting addon default-storageclass=true in "functional-498341"
	W1124 09:22:32.916172 1826875 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:22:32.916194 1826875 host.go:66] Checking if "functional-498341" exists ...
	I1124 09:22:32.916636 1826875 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:22:32.943283 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:22:32.962864 1826875 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:22:32.962878 1826875 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:22:32.962944 1826875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:22:33.000070 1826875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:22:33.112433 1826875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:22:33.158550 1826875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:22:33.176684 1826875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:22:33.916758 1826875 node_ready.go:35] waiting up to 6m0s for node "functional-498341" to be "Ready" ...
	I1124 09:22:33.919958 1826875 node_ready.go:49] node "functional-498341" is "Ready"
	I1124 09:22:33.919974 1826875 node_ready.go:38] duration metric: took 3.198351ms for node "functional-498341" to be "Ready" ...
	I1124 09:22:33.919985 1826875 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:22:33.920042 1826875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:22:33.928015 1826875 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:22:33.930895 1826875 addons.go:530] duration metric: took 1.066478613s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:22:33.934093 1826875 api_server.go:72] duration metric: took 1.069924737s to wait for apiserver process to appear ...
	I1124 09:22:33.934109 1826875 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:22:33.934127 1826875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 09:22:33.943288 1826875 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1124 09:22:33.944302 1826875 api_server.go:141] control plane version: v1.34.2
	I1124 09:22:33.944316 1826875 api_server.go:131] duration metric: took 10.2017ms to wait for apiserver health ...
	I1124 09:22:33.944323 1826875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:22:33.947266 1826875 system_pods.go:59] 8 kube-system pods found
	I1124 09:22:33.947280 1826875 system_pods.go:61] "coredns-66bc5c9577-vfd2t" [d8b73e41-010c-4712-933f-6ca47fb26a6a] Running
	I1124 09:22:33.947310 1826875 system_pods.go:61] "etcd-functional-498341" [06ec2eb1-a0ff-4f32-8f39-031eb5592563] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:22:33.947313 1826875 system_pods.go:61] "kindnet-dxrpc" [ac0a9329-4003-4328-9896-e4fbc9ec36bc] Running
	I1124 09:22:33.947319 1826875 system_pods.go:61] "kube-apiserver-functional-498341" [53f00290-216f-48af-8fe6-1806a95ebee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:22:33.947324 1826875 system_pods.go:61] "kube-controller-manager-functional-498341" [dbaaefa2-eeae-425b-9ce7-732b2602ad3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:22:33.947327 1826875 system_pods.go:61] "kube-proxy-4n9vx" [70c6582b-4573-4300-9bf8-c45ba36f762a] Running
	I1124 09:22:33.947332 1826875 system_pods.go:61] "kube-scheduler-functional-498341" [6f930e44-5f27-4bec-a0ac-bfe6bbc2838b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:22:33.947345 1826875 system_pods.go:61] "storage-provisioner" [4a8afbbb-2929-4192-a5d1-4e6257b297ae] Running
	I1124 09:22:33.947349 1826875 system_pods.go:74] duration metric: took 3.021496ms to wait for pod list to return data ...
	I1124 09:22:33.947355 1826875 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:22:33.949471 1826875 default_sa.go:45] found service account: "default"
	I1124 09:22:33.949482 1826875 default_sa.go:55] duration metric: took 2.12358ms for default service account to be created ...
	I1124 09:22:33.949489 1826875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:22:33.952257 1826875 system_pods.go:86] 8 kube-system pods found
	I1124 09:22:33.952271 1826875 system_pods.go:89] "coredns-66bc5c9577-vfd2t" [d8b73e41-010c-4712-933f-6ca47fb26a6a] Running
	I1124 09:22:33.952280 1826875 system_pods.go:89] "etcd-functional-498341" [06ec2eb1-a0ff-4f32-8f39-031eb5592563] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:22:33.952289 1826875 system_pods.go:89] "kindnet-dxrpc" [ac0a9329-4003-4328-9896-e4fbc9ec36bc] Running
	I1124 09:22:33.952295 1826875 system_pods.go:89] "kube-apiserver-functional-498341" [53f00290-216f-48af-8fe6-1806a95ebee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:22:33.952300 1826875 system_pods.go:89] "kube-controller-manager-functional-498341" [dbaaefa2-eeae-425b-9ce7-732b2602ad3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:22:33.952305 1826875 system_pods.go:89] "kube-proxy-4n9vx" [70c6582b-4573-4300-9bf8-c45ba36f762a] Running
	I1124 09:22:33.952310 1826875 system_pods.go:89] "kube-scheduler-functional-498341" [6f930e44-5f27-4bec-a0ac-bfe6bbc2838b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:22:33.952313 1826875 system_pods.go:89] "storage-provisioner" [4a8afbbb-2929-4192-a5d1-4e6257b297ae] Running
	I1124 09:22:33.952319 1826875 system_pods.go:126] duration metric: took 2.825546ms to wait for k8s-apps to be running ...
	I1124 09:22:33.952326 1826875 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:22:33.952383 1826875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:22:33.965442 1826875 system_svc.go:56] duration metric: took 13.107312ms WaitForService to wait for kubelet
	I1124 09:22:33.965460 1826875 kubeadm.go:587] duration metric: took 1.101300429s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:22:33.965475 1826875 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:22:33.968056 1826875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 09:22:33.968083 1826875 node_conditions.go:123] node cpu capacity is 2
	I1124 09:22:33.968092 1826875 node_conditions.go:105] duration metric: took 2.61317ms to run NodePressure ...
	I1124 09:22:33.968104 1826875 start.go:242] waiting for startup goroutines ...
	I1124 09:22:33.968110 1826875 start.go:247] waiting for cluster config update ...
	I1124 09:22:33.968120 1826875 start.go:256] writing updated cluster config ...
	I1124 09:22:33.968421 1826875 ssh_runner.go:195] Run: rm -f paused
	I1124 09:22:33.972146 1826875 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:22:33.975417 1826875 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfd2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:33.980466 1826875 pod_ready.go:94] pod "coredns-66bc5c9577-vfd2t" is "Ready"
	I1124 09:22:33.980480 1826875 pod_ready.go:86] duration metric: took 5.047737ms for pod "coredns-66bc5c9577-vfd2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:33.982799 1826875 pod_ready.go:83] waiting for pod "etcd-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:22:35.988686 1826875 pod_ready.go:104] pod "etcd-functional-498341" is not "Ready", error: <nil>
	W1124 09:22:38.487737 1826875 pod_ready.go:104] pod "etcd-functional-498341" is not "Ready", error: <nil>
	W1124 09:22:40.488247 1826875 pod_ready.go:104] pod "etcd-functional-498341" is not "Ready", error: <nil>
	W1124 09:22:42.488552 1826875 pod_ready.go:104] pod "etcd-functional-498341" is not "Ready", error: <nil>
	I1124 09:22:43.488334 1826875 pod_ready.go:94] pod "etcd-functional-498341" is "Ready"
	I1124 09:22:43.488348 1826875 pod_ready.go:86] duration metric: took 9.50553522s for pod "etcd-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.490851 1826875 pod_ready.go:83] waiting for pod "kube-apiserver-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.495851 1826875 pod_ready.go:94] pod "kube-apiserver-functional-498341" is "Ready"
	I1124 09:22:43.495866 1826875 pod_ready.go:86] duration metric: took 5.002371ms for pod "kube-apiserver-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.498483 1826875 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.504014 1826875 pod_ready.go:94] pod "kube-controller-manager-functional-498341" is "Ready"
	I1124 09:22:43.504029 1826875 pod_ready.go:86] duration metric: took 5.534135ms for pod "kube-controller-manager-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.506800 1826875 pod_ready.go:83] waiting for pod "kube-proxy-4n9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.686748 1826875 pod_ready.go:94] pod "kube-proxy-4n9vx" is "Ready"
	I1124 09:22:43.686763 1826875 pod_ready.go:86] duration metric: took 179.949496ms for pod "kube-proxy-4n9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:43.886430 1826875 pod_ready.go:83] waiting for pod "kube-scheduler-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:45.086911 1826875 pod_ready.go:94] pod "kube-scheduler-functional-498341" is "Ready"
	I1124 09:22:45.086928 1826875 pod_ready.go:86] duration metric: took 1.200482863s for pod "kube-scheduler-functional-498341" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:22:45.086940 1826875 pod_ready.go:40] duration metric: took 11.114770911s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:22:45.182248 1826875 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1124 09:22:45.188854 1826875 out.go:179] * Done! kubectl is now configured to use "functional-498341" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 09:23:14 functional-498341 crio[3532]: time="2025-11-24T09:23:14.961334959Z" level=info msg="Pulling image: docker.io/nginx:latest" id=0d172539-47c8-413c-9333-b8c5a28d4807 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:23:14 functional-498341 crio[3532]: time="2025-11-24T09:23:14.962785481Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.177213242Z" level=info msg="Stopping pod sandbox: e5c7fcc4f17b02686418618931312a92d5b9c0d1a87db35d39c9139d1d773c22" id=30ebd051-d22f-40a8-85d8-d8a4e86e1f53 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.177272672Z" level=info msg="Stopped pod sandbox (already stopped): e5c7fcc4f17b02686418618931312a92d5b9c0d1a87db35d39c9139d1d773c22" id=30ebd051-d22f-40a8-85d8-d8a4e86e1f53 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.178005744Z" level=info msg="Removing pod sandbox: e5c7fcc4f17b02686418618931312a92d5b9c0d1a87db35d39c9139d1d773c22" id=4da4b52d-d3c4-410b-a0ac-69156a7ca8a5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.181628027Z" level=info msg="Removed pod sandbox: e5c7fcc4f17b02686418618931312a92d5b9c0d1a87db35d39c9139d1d773c22" id=4da4b52d-d3c4-410b-a0ac-69156a7ca8a5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.182281894Z" level=info msg="Stopping pod sandbox: 70ada30428607f4fdc57680c57c45975cc348bb0d7306a7090a93c5115b33ecd" id=2d1f38f5-9b52-48e4-a0ca-48d52ed77095 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.182332873Z" level=info msg="Stopped pod sandbox (already stopped): 70ada30428607f4fdc57680c57c45975cc348bb0d7306a7090a93c5115b33ecd" id=2d1f38f5-9b52-48e4-a0ca-48d52ed77095 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.182688849Z" level=info msg="Removing pod sandbox: 70ada30428607f4fdc57680c57c45975cc348bb0d7306a7090a93c5115b33ecd" id=38f54290-1099-4c48-b257-2def5e3e1d19 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.186120902Z" level=info msg="Removed pod sandbox: 70ada30428607f4fdc57680c57c45975cc348bb0d7306a7090a93c5115b33ecd" id=38f54290-1099-4c48-b257-2def5e3e1d19 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.18671318Z" level=info msg="Stopping pod sandbox: 8f74ab9a1e7874e8019f3fcb3e6b9f9627b9d33551556cbb6c81d61480049966" id=b6f5ff74-5964-4f06-9e45-65d4edffc701 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.186835479Z" level=info msg="Stopped pod sandbox (already stopped): 8f74ab9a1e7874e8019f3fcb3e6b9f9627b9d33551556cbb6c81d61480049966" id=b6f5ff74-5964-4f06-9e45-65d4edffc701 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.187155007Z" level=info msg="Removing pod sandbox: 8f74ab9a1e7874e8019f3fcb3e6b9f9627b9d33551556cbb6c81d61480049966" id=41277d7e-5adb-4ec6-bbdf-9a1f6445cc38 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 09:23:26 functional-498341 crio[3532]: time="2025-11-24T09:23:26.190671508Z" level=info msg="Removed pod sandbox: 8f74ab9a1e7874e8019f3fcb3e6b9f9627b9d33551556cbb6c81d61480049966" id=41277d7e-5adb-4ec6-bbdf-9a1f6445cc38 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 09:23:45 functional-498341 crio[3532]: time="2025-11-24T09:23:45.320061031Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:24:15 functional-498341 crio[3532]: time="2025-11-24T09:24:15.583124264Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cd4631cb-9488-4075-bfb5-c0eab2bc94a0 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:24:28 functional-498341 crio[3532]: time="2025-11-24T09:24:28.110145323Z" level=info msg="Pulling image: docker.io/nginx:latest" id=d9847c83-2d1b-44be-8dfb-35430647e0a5 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:24:28 functional-498341 crio[3532]: time="2025-11-24T09:24:28.112452906Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:24:58 functional-498341 crio[3532]: time="2025-11-24T09:24:58.410696351Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:25:04 functional-498341 crio[3532]: time="2025-11-24T09:25:04.658701015Z" level=info msg="Trying to access \"docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712\""
	Nov 24 09:25:34 functional-498341 crio[3532]: time="2025-11-24T09:25:34.944199784Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a497d6b1-e2a4-42e0-920f-38d50f667b6b name=/runtime.v1.ImageService/PullImage
	Nov 24 09:26:00 functional-498341 crio[3532]: time="2025-11-24T09:26:00.111263823Z" level=info msg="Pulling image: docker.io/nginx:latest" id=7e3bda8a-92cc-4c64-96bb-7d6186b11c46 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:26:00 functional-498341 crio[3532]: time="2025-11-24T09:26:00.160567877Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:26:30 functional-498341 crio[3532]: time="2025-11-24T09:26:30.468768508Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 24 09:27:00 functional-498341 crio[3532]: time="2025-11-24T09:27:00.771150273Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2936a903-894d-440a-ac99-7fef05fdfc62 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c637ec76ab83c       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   4 minutes ago       Running             nginx                     0                   63cf0c579e198       nginx-svc                                   default
	1fb0c9b9a85a0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  4 minutes ago       Running             storage-provisioner       4                   feaeb05d97102       storage-provisioner                         kube-system
	71b403f584411       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  4 minutes ago       Running             kube-proxy                3                   191e021fbd5e9       kube-proxy-4n9vx                            kube-system
	a891e2cd44b94       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  4 minutes ago       Running             kindnet-cni               3                   4a18924a512d0       kindnet-dxrpc                               kube-system
	15e71e99b984a       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                  4 minutes ago       Running             kube-apiserver            0                   d52339a0af995       kube-apiserver-functional-498341            kube-system
	baa104dad4c40       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  4 minutes ago       Running             kube-scheduler            3                   b10cd6e774a9f       kube-scheduler-functional-498341            kube-system
	c108a7442e642       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  4 minutes ago       Running             kube-controller-manager   3                   74279f5069235       kube-controller-manager-functional-498341   kube-system
	9aa825831f876       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  4 minutes ago       Running             etcd                      3                   56ec423187fee       etcd-functional-498341                      kube-system
	559498b49775e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  5 minutes ago       Exited              storage-provisioner       3                   feaeb05d97102       storage-provisioner                         kube-system
	4f74f3fa64dec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  5 minutes ago       Running             coredns                   2                   edc04b4c96ff0       coredns-66bc5c9577-vfd2t                    kube-system
	04dc6d3814bef       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  5 minutes ago       Exited              kube-controller-manager   2                   74279f5069235       kube-controller-manager-functional-498341   kube-system
	49717583e9f2f       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  5 minutes ago       Exited              etcd                      2                   56ec423187fee       etcd-functional-498341                      kube-system
	93d44c5402102       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  5 minutes ago       Exited              kube-scheduler            2                   b10cd6e774a9f       kube-scheduler-functional-498341            kube-system
	3e05c09486ff2       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  5 minutes ago       Exited              kube-proxy                2                   191e021fbd5e9       kube-proxy-4n9vx                            kube-system
	d917fc755b360       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  5 minutes ago       Exited              kindnet-cni               2                   4a18924a512d0       kindnet-dxrpc                               kube-system
	145da221b5b69       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  5 minutes ago       Exited              coredns                   1                   edc04b4c96ff0       coredns-66bc5c9577-vfd2t                    kube-system
	
	
	==> coredns [145da221b5b6952829706f5d63a557fe58f15bd4c104a401d7c4c307c43b6de7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41033 - 41467 "HINFO IN 7622388333576306592.7037694830251736267. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031822616s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4f74f3fa64dec4ab5760c54d4c13bd86a207e5012bffa99ac8d9fa91691713d5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36179 - 13838 "HINFO IN 7540107475593800547.8826876563962152014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.057152215s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41354->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41364->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:41374->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-498341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-498341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=functional-498341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_20_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:20:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-498341
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:27:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:26:56 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:26:56 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:26:56 +0000   Mon, 24 Nov 2025 09:20:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:26:56 +0000   Mon, 24 Nov 2025 09:21:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-498341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                b19cc9fb-383b-4269-9c57-72146af388e0
	  Boot ID:                    27a92f9c-55a4-4798-92be-317cdb891088
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7d85dfc575-ktl8q          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-vfd2t                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m52s
	  kube-system                 etcd-functional-498341                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m58s
	  kube-system                 kindnet-dxrpc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m53s
	  kube-system                 kube-apiserver-functional-498341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-functional-498341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-proxy-4n9vx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 kube-scheduler-functional-498341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m51s                  kube-proxy       
	  Normal   Starting                 4m44s                  kube-proxy       
	  Normal   Starting                 5m51s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m5s (x8 over 7m5s)    kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m5s (x8 over 7m5s)    kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m5s (x8 over 7m5s)    kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     6m58s                  kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 6m58s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m58s                  kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m58s                  kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m58s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           6m54s                  node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	  Normal   NodeReady                6m11s                  kubelet          Node functional-498341 status is now: NodeReady
	  Normal   RegisteredNode           5m48s                  node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	  Normal   NodeHasSufficientMemory  4m50s (x8 over 4m50s)  kubelet          Node functional-498341 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 4m50s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 4m50s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    4m50s (x8 over 4m50s)  kubelet          Node functional-498341 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m50s (x8 over 4m50s)  kubelet          Node functional-498341 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m42s                  node-controller  Node functional-498341 event: Registered Node functional-498341 in Controller
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [49717583e9f2f1306311082767a62c6033da3d6013dc959aebfcefc5f68f1316] <==
	{"level":"info","ts":"2025-11-24T09:22:01.780383Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T09:22:01.781185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:22:01.782015Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T09:22:01.784941Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-24T09:22:01.797885Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-24T09:22:01.798067Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T09:22:01.800939Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-11-24T09:22:02.538050Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T09:22:02.538155Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-498341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T09:22:02.538299Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T09:22:02.538404Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T09:22:02.541194Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.541338Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-24T09:22:02.541443Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T09:22:02.541493Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541743Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541816Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T09:22:02.541856Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541928Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T09:22:02.541963Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T09:22:02.541994Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.549914Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T09:22:02.550079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T09:22:02.550142Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T09:22:02.550190Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-498341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [9aa825831f876fd8076d516a591bb4a899307d3383d1d114c317d0483577d5e2] <==
	{"level":"warn","ts":"2025-11-24T09:22:29.409320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.442309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.470956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.501897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.531688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.557002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.593633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.617017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.662247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.693042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.726777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.745495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.766147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.779633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.801975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.813601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.859384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.877233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.893999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.917660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.936429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.969446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.985823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:29.999509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:22:30.089235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:27:16 up  8:09,  0 user,  load average: 0.23, 0.98, 2.05
	Linux functional-498341 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a891e2cd44b943fcb0b33577c5e1ba116b71c5708ee7e684e46226d679200d3e] <==
	I1124 09:25:11.712886       1 main.go:301] handling current node
	I1124 09:25:21.711362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:25:21.711396       1 main.go:301] handling current node
	I1124 09:25:31.710801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:25:31.710919       1 main.go:301] handling current node
	I1124 09:25:41.714023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:25:41.714066       1 main.go:301] handling current node
	I1124 09:25:51.708094       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:25:51.708235       1 main.go:301] handling current node
	I1124 09:26:01.708371       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:26:01.708406       1 main.go:301] handling current node
	I1124 09:26:11.712782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:26:11.712842       1 main.go:301] handling current node
	I1124 09:26:21.714312       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:26:21.714345       1 main.go:301] handling current node
	I1124 09:26:31.708010       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:26:31.708142       1 main.go:301] handling current node
	I1124 09:26:41.708523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:26:41.708564       1 main.go:301] handling current node
	I1124 09:26:51.713310       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:26:51.713342       1 main.go:301] handling current node
	I1124 09:27:01.710626       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:27:01.710674       1 main.go:301] handling current node
	I1124 09:27:11.714337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 09:27:11.714368       1 main.go:301] handling current node
	
	
	==> kindnet [d917fc755b36025672873373abda2424eb382abe2132248dbf900e3754b4abc0] <==
	I1124 09:22:00.524698       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:22:00.524978       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 09:22:00.525163       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:22:00.525178       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:22:00.525193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:22:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:22:00.843833       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:22:00.843949       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:22:00.843993       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:22:00.844168       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 09:22:10.844778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:22:10.845799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 09:22:10.846053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:22:10.852198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 09:22:21.928399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 09:22:22.048527       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 09:22:22.272677       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 09:22:22.402635       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	
	
	==> kube-apiserver [15e71e99b984ad56351b668dea7807b14fb8676c4c2532e7c2ef16079ae69280] <==
	I1124 09:22:30.895350       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 09:22:30.895633       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 09:22:30.895694       1 aggregator.go:171] initial CRD sync complete...
	I1124 09:22:30.895724       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 09:22:30.895751       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:22:30.895777       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:22:30.904992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 09:22:30.905062       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 09:22:30.910665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 09:22:30.914166       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1124 09:22:30.915452       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 09:22:30.917713       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 09:22:31.120722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:22:31.704410       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:22:32.581570       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:22:32.700466       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:22:32.771113       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:22:32.778890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:22:36.293726       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:22:36.297775       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:22:36.299633       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:22:48.647984       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.152.182"}
	I1124 09:22:54.729238       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.117.210"}
	I1124 09:23:03.397458       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.131.230"}
	E1124 09:23:13.428237       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50192: use of closed network connection
	
	
	==> kube-controller-manager [04dc6d3814befd6299441a4ece661dc39272719bae76bd4c2141a50dc3765f9e] <==
	
	
	==> kube-controller-manager [c108a7442e642f500cad5954b3fface6603225ecb02334b8443c670f0ef39abc] <==
	I1124 09:22:34.212037       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 09:22:34.212065       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 09:22:34.212096       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 09:22:34.214371       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:22:34.214946       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:22:34.216039       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:34.216070       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:22:34.216168       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:22:34.221500       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 09:22:34.222707       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 09:22:34.223836       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 09:22:34.251290       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:22:34.252574       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 09:22:34.255166       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 09:22:34.256340       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:22:34.256393       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:22:34.256439       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:22:34.257650       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:22:34.257654       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:22:34.258886       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 09:22:34.261228       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:22:34.261258       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:22:34.262384       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:22:34.267714       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:22:34.270102       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3e05c09486ff2bde385a37304e422aa373f31de32bbc417794928249de9bfe26] <==
	I1124 09:22:01.980453       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:22:02.484340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1124 09:22:12.590067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-498341&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [71b403f5844112bd1e54c4ac1415199069711a4ca59aeb173507308c18b0aa8d] <==
	I1124 09:22:31.472338       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:22:31.566912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:22:31.667564       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:22:31.667606       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 09:22:31.667698       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:22:31.687130       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:22:31.687187       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:22:31.691183       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:22:31.691508       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:22:31.691534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:22:31.694760       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:22:31.694838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:22:31.695221       1 config.go:200] "Starting service config controller"
	I1124 09:22:31.695280       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:22:31.695628       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:22:31.695698       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:22:31.696187       1 config.go:309] "Starting node config controller"
	I1124 09:22:31.696252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:22:31.696283       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:22:31.795487       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:22:31.795558       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:22:31.795818       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [93d44c54021024e7f3844f329c7eaf35fea13b30469f40fd3c5ccef46f5e0295] <==
	I1124 09:22:02.766884       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [baa104dad4c402409f627a01e3f9b0455ab0b1a3b1f384be692c3db9bf5b6e79] <==
	I1124 09:22:27.117381       1 serving.go:386] Generated self-signed cert in-memory
	W1124 09:22:30.792631       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 09:22:30.793242       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 09:22:30.793313       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 09:22:30.793345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 09:22:30.835359       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 09:22:30.835471       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:22:30.838156       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:22:30.838206       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 09:22:30.839079       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 09:22:30.839608       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 09:22:30.939283       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:24:15 functional-498341 kubelet[4036]: E1124 09:24:15.584825    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:24:16 functional-498341 kubelet[4036]: E1124 09:24:16.576390    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:24:30 functional-498341 kubelet[4036]: E1124 09:24:30.110780    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:25:34 functional-498341 kubelet[4036]: E1124 09:25:34.943333    4036 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 24 09:25:34 functional-498341 kubelet[4036]: E1124 09:25:34.943398    4036 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 24 09:25:34 functional-498341 kubelet[4036]: E1124 09:25:34.943605    4036 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(6acc07e2-d1b3-45c3-bff6-9989cf802917): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 24 09:25:34 functional-498341 kubelet[4036]: E1124 09:25:34.943649    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:25:34 functional-498341 kubelet[4036]: E1124 09:25:34.944834    4036 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Nov 24 09:25:34 functional-498341 kubelet[4036]: E1124 09:25:34.944950    4036 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Nov 24 09:25:34 functional-498341 kubelet[4036]: E1124 09:25:34.945034    4036 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-ktl8q_default(ab1e6451-329d-49eb-83f1-7cc1b00f3e21): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Nov 24 09:25:34 functional-498341 kubelet[4036]: E1124 09:25:34.945063    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:25:47 functional-498341 kubelet[4036]: E1124 09:25:47.109787    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:25:48 functional-498341 kubelet[4036]: E1124 09:25:48.110621    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:26:01 functional-498341 kubelet[4036]: E1124 09:26:01.109584    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:26:14 functional-498341 kubelet[4036]: E1124 09:26:14.109935    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:27:00 functional-498341 kubelet[4036]: E1124 09:27:00.769324    4036 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 24 09:27:00 functional-498341 kubelet[4036]: E1124 09:27:00.769387    4036 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 24 09:27:00 functional-498341 kubelet[4036]: E1124 09:27:00.769581    4036 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(6acc07e2-d1b3-45c3-bff6-9989cf802917): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 24 09:27:00 functional-498341 kubelet[4036]: E1124 09:27:00.769619    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:27:00 functional-498341 kubelet[4036]: E1124 09:27:00.771904    4036 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Nov 24 09:27:00 functional-498341 kubelet[4036]: E1124 09:27:00.772097    4036 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Nov 24 09:27:00 functional-498341 kubelet[4036]: E1124 09:27:00.772262    4036 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-ktl8q_default(ab1e6451-329d-49eb-83f1-7cc1b00f3e21): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Nov 24 09:27:00 functional-498341 kubelet[4036]: E1124 09:27:00.772465    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	Nov 24 09:27:15 functional-498341 kubelet[4036]: E1124 09:27:15.109854    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6acc07e2-d1b3-45c3-bff6-9989cf802917"
	Nov 24 09:27:16 functional-498341 kubelet[4036]: E1124 09:27:16.110928    4036 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ktl8q" podUID="ab1e6451-329d-49eb-83f1-7cc1b00f3e21"
	
	
	==> storage-provisioner [1fb0c9b9a85a0fec8a1ab2c37119c62c6681f8e5e630a9272f50a23e10b7fd9a] <==
	W1124 09:26:52.072326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:26:54.075529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:26:54.080082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:26:56.083911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:26:56.093600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:26:58.096398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:26:58.100815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:00.105241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:00.147437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:02.150525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:02.155188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:04.158786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:04.163359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:06.166599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:06.171475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:08.175159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:08.179597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:10.183361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:10.190708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:12.193607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:12.198361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:14.201416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:14.208338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:16.211460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:27:16.215732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [559498b49775e56118c49fa50a90d10b8e09907d7e647d35eb62a47bc1b3323c] <==
	I1124 09:22:13.582872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 09:22:23.886089       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-498341 -n functional-498341
helpers_test.go:269: (dbg) Run:  kubectl --context functional-498341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-connect-7d85dfc575-ktl8q sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-498341 describe pod hello-node-connect-7d85dfc575-ktl8q sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-498341 describe pod hello-node-connect-7d85dfc575-ktl8q sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-connect-7d85dfc575-ktl8q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:23:03 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dch74 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dch74:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m14s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ktl8q to functional-498341
	  Normal   Pulling    51s (x4 over 4m12s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     17s (x4 over 4m12s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     17s (x4 over 4m12s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x6 over 4m11s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x6 over 4m11s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498341/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 09:23:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s4942 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-s4942:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  4m3s                default-scheduler  Successfully assigned default/sp-pod to functional-498341
	  Warning  Failed     3m2s                kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     103s                kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    77s (x3 over 4m3s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     17s (x3 over 3m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     17s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x3 over 3m1s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2s (x3 over 3m1s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (262.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-498341 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-498341 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-t27wr" [8b2860cf-9293-4539-9b71-9d07bea924d9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1124 09:30:36.849737 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-498341 -n functional-498341
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-24 09:37:18.18679616 +0000 UTC m=+1490.766628498
functional_test.go:1460: (dbg) Run:  kubectl --context functional-498341 describe po hello-node-75c85bcc94-t27wr -n default
functional_test.go:1460: (dbg) kubectl --context functional-498341 describe po hello-node-75c85bcc94-t27wr -n default:
Name:             hello-node-75c85bcc94-t27wr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-498341/192.168.49.2
Start Time:       Mon, 24 Nov 2025 09:27:17 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvjpt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vvjpt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-t27wr to functional-498341
  Normal   Pulling    6m19s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m9s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m9s (x5 over 10m)      kubelet            Error: ErrImagePull
  Warning  Failed     4m52s (x16 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    3m54s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-498341 logs hello-node-75c85bcc94-t27wr -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-498341 logs hello-node-75c85bcc94-t27wr -n default: exit status 1 (96.57713ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-t27wr" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-498341 logs hello-node-75c85bcc94-t27wr -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.73s)
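The Events table above pins the root cause: CRI-O's enforcing short-name mode rejects the unqualified reference `kicbase/echo-server` as ambiguous. A minimal sketch of the usual remedy, qualifying the reference with an explicit registry before deploying; the `docker.io` default and the detection heuristic here are assumptions for illustration, not the test suite's logic:

```shell
# Minimal sketch, assuming docker.io is the intended registry: CRI-O's
# enforcing short-name mode (see the Failed events above) rejects
# "kicbase/echo-server" as ambiguous, so qualify it explicitly.
IMAGE="kicbase/echo-server:latest"
case "$IMAGE" in
  # crude heuristic: a dot before a slash suggests a registry host is present
  *.*/*) QUALIFIED="$IMAGE" ;;
  *)     QUALIFIED="docker.io/$IMAGE" ;;
esac
echo "$QUALIFIED"   # prints docker.io/kicbase/echo-server:latest
```

Deploying with the fully qualified name (or configuring `unqualified-search-registries` in `registries.conf`) sidesteps the ambiguity check.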

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 service --namespace=default --https --url hello-node: exit status 115 (394.956451ms)

-- stdout --
	https://192.168.49.2:31924
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-498341 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 service hello-node --url --format={{.IP}}: exit status 115 (392.740948ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-498341 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 service hello-node --url: exit status 115 (404.194335ms)

-- stdout --
	http://192.168.49.2:31924
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-498341 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31924
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.40s)
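All three service subtests above fail the same way: a URL still lands on stdout while minikube exits 115 with SVC_UNREACHABLE, because no running pod backs hello-node. A minimal sketch of the defensive pattern for callers, treating the exit code rather than the printed URL as authoritative; `false` stands in for the minikube invocation:

```shell
# Sketch: trust the exit status, not the stdout URL. The command substitution
# mimics the failing `minikube service --url` call: it prints a URL but exits
# nonzero (`false` is a stand-in for the real invocation).
if url="$(echo "http://192.168.49.2:31924"; false)"; then
  echo "use $url"
else
  echo "service unreachable; not using printed URL"   # this branch runs
fi
```

A command substitution's exit status is that of its last command, so the assignment carries the failure through to the `if`.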

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image load --daemon kicbase/echo-server:functional-498341 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-498341" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image load --daemon kicbase/echo-server:functional-498341 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-498341" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-498341
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image load --daemon kicbase/echo-server:functional-498341 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-498341" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image save kicbase/echo-server:functional-498341 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1124 09:37:30.726202 1835332 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:37:30.727063 1835332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:37:30.727082 1835332 out.go:374] Setting ErrFile to fd 2...
	I1124 09:37:30.727089 1835332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:37:30.727451 1835332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:37:30.728116 1835332 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:37:30.728369 1835332 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:37:30.728931 1835332 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
	I1124 09:37:30.747567 1835332 ssh_runner.go:195] Run: systemctl --version
	I1124 09:37:30.747627 1835332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
	I1124 09:37:30.765292 1835332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
	I1124 09:37:30.872217 1835332 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1124 09:37:30.872332 1835332 cache_images.go:255] Failed to load cached images for "functional-498341": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1124 09:37:30.872389 1835332 cache_images.go:267] failed pushing to: functional-498341

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
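The stderr above shows the real failure chain: the earlier `image save` produced no tarball, so `image load` fails on a plain `stat`. A minimal guard sketch that fails fast with a clearer message when the artifact is absent; the path and the echoed command are illustrative, not the suite's code:

```shell
# Sketch: check that the save artifact exists before attempting the load, so
# a missing tarball surfaces as a clear message instead of a stat error.
# The path is illustrative; the real run used a Jenkins workspace path.
TARBALL="/tmp/echo-server-save-example.tar"
if [ -f "$TARBALL" ]; then
  echo "load $TARBALL"
else
  echo "skip load: $TARBALL missing"
fi
```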

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-498341
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image save --daemon kicbase/echo-server:functional-498341 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-498341
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-498341: exit status 1 (18.204233ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-498341

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-498341

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (512.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1124 09:40:36.850911 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:54.299969 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:54.306468 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:54.318035 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:54.339514 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:54.381056 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:54.462570 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:54.624112 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:54.945903 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:55.587312 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:56.869455 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:42:59.430876 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:43:04.552737 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:43:14.794889 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:43:35.276831 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:44:16.238231 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:45:36.850221 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:45:38.160842 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m30.685978279s)

-- stdout --
	* [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Found network options:
	  - HTTP_PROXY=localhost:45763
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:45763 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-373432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-373432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000260156s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001181984s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001181984s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 6 (298.0963ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1124 09:46:58.126858 1843796 status.go:458] kubeconfig endpoint: get endpoint: "functional-373432" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image save kicbase/echo-server:functional-498341 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image rm kicbase/echo-server:functional-498341 --alsologtostderr                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image save --daemon kicbase/echo-server:functional-498341 --alsologtostderr                                                             │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/1806704.pem                                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /usr/share/ca-certificates/1806704.pem                                                                                     │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/18067042.pem                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /usr/share/ca-certificates/18067042.pem                                                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/test/nested/copy/1806704/hosts                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format short --alsologtostderr                                                                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format yaml --alsologtostderr                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh pgrep buildkitd                                                                                                                     │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │                     │
	│ image          │ functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format json --alsologtostderr                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format table --alsologtostderr                                                                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ delete         │ -p functional-498341                                                                                                                                      │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │ 24 Nov 25 09:38 UTC │
	│ start          │ -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:38:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:38:27.174582 1837413 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:38:27.174683 1837413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:38:27.174688 1837413 out.go:374] Setting ErrFile to fd 2...
	I1124 09:38:27.174692 1837413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:38:27.174930 1837413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:38:27.175319 1837413 out.go:368] Setting JSON to false
	I1124 09:38:27.176121 1837413 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30058,"bootTime":1763947050,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:38:27.176174 1837413 start.go:143] virtualization:  
	I1124 09:38:27.183406 1837413 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:38:27.187226 1837413 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:38:27.187339 1837413 notify.go:221] Checking for updates...
	I1124 09:38:27.194447 1837413 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:38:27.197818 1837413 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:38:27.201136 1837413 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:38:27.204452 1837413 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:38:27.207687 1837413 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:38:27.211060 1837413 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:38:27.240110 1837413 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:38:27.240226 1837413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:38:27.297749 1837413 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-24 09:38:27.288040767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:38:27.297837 1837413 docker.go:319] overlay module found
	I1124 09:38:27.301168 1837413 out.go:179] * Using the docker driver based on user configuration
	I1124 09:38:27.304303 1837413 start.go:309] selected driver: docker
	I1124 09:38:27.304313 1837413 start.go:927] validating driver "docker" against <nil>
	I1124 09:38:27.304324 1837413 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:38:27.305048 1837413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:38:27.357639 1837413 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-24 09:38:27.348977796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:38:27.357789 1837413 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:38:27.357998 1837413 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:38:27.361087 1837413 out.go:179] * Using Docker driver with root privileges
	I1124 09:38:27.364027 1837413 cni.go:84] Creating CNI manager for ""
	I1124 09:38:27.364088 1837413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:38:27.364095 1837413 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:38:27.364181 1837413 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:38:27.367355 1837413 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:38:27.370179 1837413 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:38:27.373202 1837413 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:38:27.376081 1837413 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:38:27.376144 1837413 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:38:27.394759 1837413 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:38:27.394770 1837413 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:38:27.431926 1837413 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:38:27.566314 1837413 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:38:27.566652 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:38:27.566694 1837413 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:38:27.566720 1837413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json: {Name:mkccf2da3908fd70b657b2414588722630e89e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:38:27.566890 1837413 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:38:27.566914 1837413 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:27.566956 1837413 start.go:364] duration metric: took 34.051µs to acquireMachinesLock for "functional-373432"
	I1124 09:38:27.566973 1837413 start.go:93] Provisioning new machine with config: &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:38:27.567026 1837413 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:38:27.572556 1837413 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1124 09:38:27.572846 1837413 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:45763 to docker env.
	I1124 09:38:27.572868 1837413 start.go:159] libmachine.API.Create for "functional-373432" (driver="docker")
	I1124 09:38:27.572893 1837413 client.go:173] LocalClient.Create starting
	I1124 09:38:27.573000 1837413 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem
	I1124 09:38:27.573033 1837413 main.go:143] libmachine: Decoding PEM data...
	I1124 09:38:27.573051 1837413 main.go:143] libmachine: Parsing certificate...
	I1124 09:38:27.573097 1837413 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem
	I1124 09:38:27.573146 1837413 main.go:143] libmachine: Decoding PEM data...
	I1124 09:38:27.573157 1837413 main.go:143] libmachine: Parsing certificate...
	I1124 09:38:27.573558 1837413 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:38:27.590831 1837413 cli_runner.go:211] docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:38:27.590904 1837413 network_create.go:284] running [docker network inspect functional-373432] to gather additional debugging logs...
	I1124 09:38:27.590928 1837413 cli_runner.go:164] Run: docker network inspect functional-373432
	W1124 09:38:27.610535 1837413 cli_runner.go:211] docker network inspect functional-373432 returned with exit code 1
	I1124 09:38:27.610556 1837413 network_create.go:287] error running [docker network inspect functional-373432]: docker network inspect functional-373432: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-373432 not found
	I1124 09:38:27.610569 1837413 network_create.go:289] output of [docker network inspect functional-373432]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-373432 not found
	
	** /stderr **
	I1124 09:38:27.610694 1837413 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:38:27.628556 1837413 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001929da0}
	I1124 09:38:27.628593 1837413 network_create.go:124] attempt to create docker network functional-373432 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 09:38:27.628654 1837413 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-373432 functional-373432
	I1124 09:38:27.698227 1837413 network_create.go:108] docker network functional-373432 192.168.49.0/24 created
	I1124 09:38:27.698248 1837413 kic.go:121] calculated static IP "192.168.49.2" for the "functional-373432" container
	I1124 09:38:27.698328 1837413 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:38:27.716691 1837413 cli_runner.go:164] Run: docker volume create functional-373432 --label name.minikube.sigs.k8s.io=functional-373432 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:38:27.734091 1837413 oci.go:103] Successfully created a docker volume functional-373432
	I1124 09:38:27.734162 1837413 cli_runner.go:164] Run: docker run --rm --name functional-373432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-373432 --entrypoint /usr/bin/test -v functional-373432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:38:27.771570 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:38:27.943090 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:38:28.147542 1837413 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:28.147548 1837413 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:28.147597 1837413 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:28.147618 1837413 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:28.147653 1837413 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:38:28.147661 1837413 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 64.026µs
	I1124 09:38:28.147671 1837413 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:38:28.147675 1837413 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:38:28.147681 1837413 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 64.223µs
	I1124 09:38:28.147681 1837413 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:28.147694 1837413 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:38:28.147704 1837413 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:28.147712 1837413 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:38:28.147716 1837413 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 36.152µs
	I1124 09:38:28.147721 1837413 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:38:28.147731 1837413 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:38:28.147737 1837413 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.535µs
	I1124 09:38:28.147733 1837413 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:28.147742 1837413 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:38:28.147750 1837413 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:38:28.147760 1837413 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:38:28.147765 1837413 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 34.232µs
	I1124 09:38:28.147772 1837413 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:38:28.147775 1837413 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:38:28.147779 1837413 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 29.719µs
	I1124 09:38:28.147784 1837413 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:38:28.147786 1837413 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:38:28.147790 1837413 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 270.01µs
	I1124 09:38:28.147794 1837413 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:38:28.147800 1837413 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:38:28.147804 1837413 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 267.531µs
	I1124 09:38:28.147809 1837413 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:38:28.147823 1837413 cache.go:87] Successfully saved all images to host disk.
	I1124 09:38:28.327714 1837413 oci.go:107] Successfully prepared a docker volume functional-373432
	I1124 09:38:28.327775 1837413 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1124 09:38:28.327917 1837413 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 09:38:28.328039 1837413 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:38:28.386136 1837413 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-373432 --name functional-373432 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-373432 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-373432 --network functional-373432 --ip 192.168.49.2 --volume functional-373432:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:38:28.683231 1837413 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Running}}
	I1124 09:38:28.703231 1837413 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:38:28.725278 1837413 cli_runner.go:164] Run: docker exec functional-373432 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:38:28.784130 1837413 oci.go:144] the created container "functional-373432" has a running status.
	I1124 09:38:28.784149 1837413 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa...
	I1124 09:38:29.018096 1837413 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:38:29.051100 1837413 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:38:29.073260 1837413 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:38:29.073272 1837413 kic_runner.go:114] Args: [docker exec --privileged functional-373432 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:38:29.145493 1837413 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:38:29.175497 1837413 machine.go:94] provisionDockerMachine start ...
	I1124 09:38:29.175586 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:29.202869 1837413 main.go:143] libmachine: Using SSH client type: native
	I1124 09:38:29.203220 1837413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:38:29.203227 1837413 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:38:29.203810 1837413 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32904->127.0.0.1:35005: read: connection reset by peer
	I1124 09:38:32.352923 1837413 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:38:32.352938 1837413 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:38:32.353003 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:32.370453 1837413 main.go:143] libmachine: Using SSH client type: native
	I1124 09:38:32.370769 1837413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:38:32.370777 1837413 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:38:32.530284 1837413 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:38:32.530354 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:32.548230 1837413 main.go:143] libmachine: Using SSH client type: native
	I1124 09:38:32.548534 1837413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:38:32.548547 1837413 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:38:32.697592 1837413 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:38:32.697613 1837413 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:38:32.697643 1837413 ubuntu.go:190] setting up certificates
	I1124 09:38:32.697651 1837413 provision.go:84] configureAuth start
	I1124 09:38:32.697712 1837413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:38:32.716596 1837413 provision.go:143] copyHostCerts
	I1124 09:38:32.716650 1837413 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:38:32.716658 1837413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:38:32.716730 1837413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:38:32.716817 1837413 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:38:32.716820 1837413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:38:32.716844 1837413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:38:32.716890 1837413 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:38:32.716894 1837413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:38:32.716916 1837413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:38:32.716958 1837413 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:38:33.026212 1837413 provision.go:177] copyRemoteCerts
	I1124 09:38:33.026279 1837413 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:38:33.026323 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:33.046553 1837413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:38:33.152947 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:38:33.173816 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:38:33.191611 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 09:38:33.208655 1837413 provision.go:87] duration metric: took 510.981142ms to configureAuth
	I1124 09:38:33.208673 1837413 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:38:33.208866 1837413 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:38:33.208970 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:33.230058 1837413 main.go:143] libmachine: Using SSH client type: native
	I1124 09:38:33.230368 1837413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:38:33.230380 1837413 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:38:33.525037 1837413 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:38:33.525058 1837413 machine.go:97] duration metric: took 4.349550056s to provisionDockerMachine
	I1124 09:38:33.525067 1837413 client.go:176] duration metric: took 5.952170329s to LocalClient.Create
	I1124 09:38:33.525090 1837413 start.go:167] duration metric: took 5.952222301s to libmachine.API.Create "functional-373432"
	I1124 09:38:33.525096 1837413 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:38:33.525138 1837413 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:38:33.525204 1837413 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:38:33.525252 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:33.545729 1837413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:38:33.649241 1837413 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:38:33.652509 1837413 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:38:33.652525 1837413 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:38:33.652534 1837413 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:38:33.652589 1837413 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:38:33.652678 1837413 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:38:33.652755 1837413 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:38:33.652797 1837413 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:38:33.660475 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:38:33.678041 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:38:33.695370 1837413 start.go:296] duration metric: took 170.260416ms for postStartSetup
	I1124 09:38:33.695721 1837413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:38:33.712288 1837413 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:38:33.712557 1837413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:38:33.712598 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:33.731147 1837413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:38:33.834017 1837413 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:38:33.838477 1837413 start.go:128] duration metric: took 6.271436674s to createHost
	I1124 09:38:33.838493 1837413 start.go:83] releasing machines lock for "functional-373432", held for 6.271529138s
	I1124 09:38:33.838563 1837413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:38:33.859949 1837413 out.go:179] * Found network options:
	I1124 09:38:33.863022 1837413 out.go:179]   - HTTP_PROXY=localhost:45763
	W1124 09:38:33.866046 1837413 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1124 09:38:33.869090 1837413 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1124 09:38:33.872002 1837413 ssh_runner.go:195] Run: cat /version.json
	I1124 09:38:33.872046 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:33.872058 1837413 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:38:33.872123 1837413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:38:33.895014 1837413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:38:33.895698 1837413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:38:34.086313 1837413 ssh_runner.go:195] Run: systemctl --version
	I1124 09:38:34.092555 1837413 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:38:34.126975 1837413 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:38:34.131345 1837413 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:38:34.131407 1837413 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:38:34.160353 1837413 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 09:38:34.160374 1837413 start.go:496] detecting cgroup driver to use...
	I1124 09:38:34.160406 1837413 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:38:34.160482 1837413 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:38:34.178422 1837413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:38:34.190798 1837413 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:38:34.190852 1837413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:38:34.208460 1837413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:38:34.227771 1837413 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:38:34.345847 1837413 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:38:34.471284 1837413 docker.go:234] disabling docker service ...
	I1124 09:38:34.471358 1837413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:38:34.499249 1837413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:38:34.512755 1837413 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:38:34.633119 1837413 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:38:34.754717 1837413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:38:34.767493 1837413 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:38:34.783145 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:38:34.928004 1837413 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:38:34.928077 1837413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:38:34.937567 1837413 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:38:34.937641 1837413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:38:34.946833 1837413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:38:34.955776 1837413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:38:34.964451 1837413 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:38:34.972565 1837413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:38:34.981421 1837413 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:38:34.995887 1837413 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:38:35.006023 1837413 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:38:35.015436 1837413 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:38:35.023906 1837413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:38:35.138359 1837413 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:38:35.324460 1837413 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:38:35.324531 1837413 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:38:35.328190 1837413 start.go:564] Will wait 60s for crictl version
	I1124 09:38:35.328248 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:35.331697 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:38:35.358170 1837413 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:38:35.358253 1837413 ssh_runner.go:195] Run: crio --version
	I1124 09:38:35.390869 1837413 ssh_runner.go:195] Run: crio --version
	I1124 09:38:35.421558 1837413 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:38:35.424373 1837413 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:38:35.440563 1837413 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:38:35.444475 1837413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:38:35.454218 1837413 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:38:35.454396 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:38:35.606951 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:38:35.752655 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:38:35.924210 1837413 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:38:35.924274 1837413 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:38:35.948204 1837413 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:38:35.948223 1837413 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:38:35.948285 1837413 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:38:35.948289 1837413 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:38:35.948319 1837413 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:38:35.948489 1837413 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:38:35.948489 1837413 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:38:35.948573 1837413 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:38:35.948580 1837413 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:38:35.948649 1837413 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:38:35.949908 1837413 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:38:35.950354 1837413 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:38:35.950607 1837413 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:38:35.950757 1837413 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:38:35.950873 1837413 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:38:35.950984 1837413 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:38:35.951185 1837413 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:38:35.951315 1837413 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:38:36.279091 1837413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:38:36.280019 1837413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 09:38:36.299191 1837413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:38:36.303567 1837413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:38:36.307280 1837413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.24-0
	I1124 09:38:36.309872 1837413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:38:36.350968 1837413 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1124 09:38:36.351000 1837413 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:38:36.351049 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:36.351113 1837413 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1124 09:38:36.351125 1837413 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 09:38:36.351143 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:36.375540 1837413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:38:36.400031 1837413 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1124 09:38:36.400064 1837413 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:38:36.400120 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:36.400162 1837413 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1124 09:38:36.400209 1837413 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:38:36.400231 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:36.438423 1837413 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca" in container runtime
	I1124 09:38:36.438453 1837413 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:38:36.438499 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:36.438576 1837413 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1124 09:38:36.438587 1837413 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:38:36.438609 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:36.438702 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:38:36.438764 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:38:36.465249 1837413 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1124 09:38:36.465257 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:38:36.465287 1837413 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:38:36.465325 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:36.465345 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:38:36.465401 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:38:36.465437 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:38:36.506217 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:38:36.506312 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:38:36.569291 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:38:36.569356 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:38:36.569396 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:38:36.569442 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:38:36.569495 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:38:36.598912 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:38:36.598974 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:38:36.677711 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:38:36.677781 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:38:36.677834 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:38:36.677895 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:38:36.677953 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:38:36.710431 1837413 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1124 09:38:36.710537 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 09:38:36.710610 1837413 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:38:36.710663 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:38:36.766855 1837413 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:38:36.766949 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:38:36.767022 1837413 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0
	I1124 09:38:36.767069 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:38:36.767116 1837413 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:38:36.767158 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:38:36.767199 1837413 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:38:36.767237 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:38:36.767295 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:38:36.767343 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 09:38:36.767351 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1124 09:38:36.767386 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 09:38:36.767395 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1124 09:38:36.776973 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 09:38:36.777019 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1124 09:38:36.832799 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 09:38:36.832812 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (21895168 bytes)
	I1124 09:38:36.832881 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 09:38:36.832890 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1124 09:38:36.832944 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 09:38:36.832954 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1124 09:38:36.833016 1837413 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:38:36.833088 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:38:36.858237 1837413 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 09:38:36.858308 1837413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1124 09:38:36.859799 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 09:38:36.859824 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1124 09:38:37.255187 1837413 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	W1124 09:38:37.266499 1837413 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1124 09:38:37.266666 1837413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:38:37.316969 1837413 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:38:37.317314 1837413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:38:37.440238 1837413 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1124 09:38:37.440271 1837413 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:38:37.440552 1837413 ssh_runner.go:195] Run: which crictl
	I1124 09:38:38.905175 1837413 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.587836528s)
	I1124 09:38:38.905194 1837413 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:38:38.905212 1837413 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:38:38.905224 1837413 ssh_runner.go:235] Completed: which crictl: (1.464646653s)
	I1124 09:38:38.905260 1837413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:38:38.905279 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:38:40.070889 1837413 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.165585494s)
	I1124 09:38:40.070964 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:38:40.071031 1837413 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.165755572s)
	I1124 09:38:40.071045 1837413 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:38:40.071068 1837413 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:38:40.071105 1837413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:38:40.104080 1837413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:38:41.281708 1837413 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.210581975s)
	I1124 09:38:41.281724 1837413 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:38:41.281741 1837413 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:38:41.281753 1837413 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.177653676s)
	I1124 09:38:41.281788 1837413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:38:41.281794 1837413 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:38:41.281869 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:38:41.286438 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 09:38:41.286464 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1124 09:38:43.224092 1837413 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0: (1.94228178s)
	I1124 09:38:43.224110 1837413 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 09:38:43.224133 1837413 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:38:43.224196 1837413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:38:44.554243 1837413 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.330025548s)
	I1124 09:38:44.554269 1837413 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:38:44.554287 1837413 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:38:44.554339 1837413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:38:45.873272 1837413 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.318912457s)
	I1124 09:38:45.873291 1837413 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:38:45.873310 1837413 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:38:45.873359 1837413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:38:46.418711 1837413 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:38:46.418746 1837413 cache_images.go:125] Successfully loaded all cached images
	I1124 09:38:46.418750 1837413 cache_images.go:94] duration metric: took 10.470513638s to LoadCachedImages
	I1124 09:38:46.418762 1837413 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:38:46.418849 1837413 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:38:46.418930 1837413 ssh_runner.go:195] Run: crio config
	I1124 09:38:46.494837 1837413 cni.go:84] Creating CNI manager for ""
	I1124 09:38:46.494847 1837413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:38:46.494869 1837413 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:38:46.494893 1837413 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:38:46.495011 1837413 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:38:46.495082 1837413 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:38:46.502959 1837413 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 09:38:46.503018 1837413 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:38:46.510774 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1124 09:38:46.510799 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1124 09:38:46.510848 1837413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:38:46.510862 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 09:38:46.510919 1837413 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:38:46.510970 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 09:38:46.524964 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 09:38:46.524996 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1124 09:38:46.525070 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 09:38:46.525079 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1124 09:38:46.525285 1837413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 09:38:46.532847 1837413 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 09:38:46.532878 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1124 09:38:47.374042 1837413 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:38:47.381830 1837413 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:38:47.397018 1837413 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:38:47.411256 1837413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:38:47.425041 1837413 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:38:47.428710 1837413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:38:47.438862 1837413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:38:47.547894 1837413 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:38:47.566070 1837413 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:38:47.566081 1837413 certs.go:195] generating shared ca certs ...
	I1124 09:38:47.566096 1837413 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:38:47.566267 1837413 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:38:47.566314 1837413 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:38:47.566320 1837413 certs.go:257] generating profile certs ...
	I1124 09:38:47.566376 1837413 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:38:47.566386 1837413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt with IP's: []
	I1124 09:38:47.676896 1837413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt ...
	I1124 09:38:47.676911 1837413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: {Name:mkef16abf19b08f24a31fd9f7c7e45d653682a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:38:47.677129 1837413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key ...
	I1124 09:38:47.677136 1837413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key: {Name:mk44f8fec7bd000b6d6b0be8a9aed336d5e93447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:38:47.677256 1837413 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:38:47.677268 1837413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt.0fcdf36b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 09:38:47.831171 1837413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt.0fcdf36b ...
	I1124 09:38:47.831193 1837413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt.0fcdf36b: {Name:mkcc4bba36c6bb8fb78a8eeaecbbeab277af3c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:38:47.831401 1837413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b ...
	I1124 09:38:47.831417 1837413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b: {Name:mk88c0305f54b3a07e001cf3d96c022743c9447b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:38:47.831521 1837413 certs.go:382] copying /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt.0fcdf36b -> /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt
	I1124 09:38:47.831594 1837413 certs.go:386] copying /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b -> /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key
	I1124 09:38:47.831645 1837413 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:38:47.831658 1837413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt with IP's: []
	I1124 09:38:47.995732 1837413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt ...
	I1124 09:38:47.995750 1837413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt: {Name:mk8aec65634f9efe874d44eebe863f03dd389ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:38:47.995953 1837413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key ...
	I1124 09:38:47.995961 1837413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key: {Name:mk096162ac8db19950e511d957ce9b47cbad7dc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:38:47.996179 1837413 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:38:47.996221 1837413 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:38:47.996230 1837413 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:38:47.996256 1837413 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:38:47.996280 1837413 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:38:47.996305 1837413 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:38:47.996347 1837413 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:38:47.996979 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:38:48.019072 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:38:48.042517 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:38:48.061466 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:38:48.080146 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:38:48.098894 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:38:48.117023 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:38:48.134905 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:38:48.152342 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:38:48.170395 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:38:48.188414 1837413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:38:48.205735 1837413 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:38:48.218709 1837413 ssh_runner.go:195] Run: openssl version
	I1124 09:38:48.224896 1837413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:38:48.233510 1837413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:38:48.237339 1837413 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:38:48.237400 1837413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:38:48.281664 1837413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:38:48.290035 1837413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:38:48.298292 1837413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:38:48.302017 1837413 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:38:48.302082 1837413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:38:48.342915 1837413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 09:38:48.351320 1837413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:38:48.359906 1837413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:38:48.363934 1837413 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:38:48.363992 1837413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:38:48.405004 1837413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:38:48.413187 1837413 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:38:48.416488 1837413 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:38:48.416535 1837413 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:38:48.416632 1837413 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:38:48.416710 1837413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:38:48.444739 1837413 cri.go:89] found id: ""
	I1124 09:38:48.444801 1837413 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:38:48.452571 1837413 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:38:48.460715 1837413 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:38:48.460785 1837413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:38:48.475740 1837413 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:38:48.475751 1837413 kubeadm.go:158] found existing configuration files:
	
	I1124 09:38:48.475802 1837413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:38:48.484378 1837413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:38:48.484434 1837413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:38:48.492201 1837413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:38:48.500773 1837413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:38:48.500832 1837413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:38:48.508595 1837413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:38:48.517264 1837413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:38:48.517321 1837413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:38:48.525427 1837413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:38:48.533450 1837413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:38:48.533520 1837413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:38:48.541351 1837413 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:38:48.584498 1837413 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:38:48.584712 1837413 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:38:48.657147 1837413 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:38:48.657218 1837413 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:38:48.657258 1837413 kubeadm.go:319] OS: Linux
	I1124 09:38:48.657302 1837413 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:38:48.657354 1837413 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:38:48.657407 1837413 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:38:48.657460 1837413 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:38:48.657507 1837413 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:38:48.657574 1837413 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:38:48.657624 1837413 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:38:48.657671 1837413 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:38:48.657722 1837413 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:38:48.722076 1837413 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:38:48.722186 1837413 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:38:48.722294 1837413 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:38:52.181989 1837413 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:38:52.204058 1837413 out.go:252]   - Generating certificates and keys ...
	I1124 09:38:52.204149 1837413 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:38:52.204213 1837413 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:38:52.270374 1837413 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:38:52.456666 1837413 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:38:52.745511 1837413 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:38:53.086270 1837413 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:38:53.216567 1837413 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:38:53.216892 1837413 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-373432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 09:38:53.325302 1837413 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:38:53.325606 1837413 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-373432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 09:38:53.439295 1837413 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:38:53.658628 1837413 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:38:53.768966 1837413 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:38:53.769473 1837413 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:38:53.953662 1837413 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:38:54.100711 1837413 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:38:54.294937 1837413 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:38:54.749994 1837413 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:38:54.927837 1837413 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:38:54.928838 1837413 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:38:54.931884 1837413 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:38:54.937349 1837413 out.go:252]   - Booting up control plane ...
	I1124 09:38:54.937446 1837413 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:38:54.937522 1837413 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:38:54.938071 1837413 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:38:54.954088 1837413 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:38:54.954394 1837413 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:38:54.962267 1837413 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:38:54.962620 1837413 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:38:54.962663 1837413 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:38:55.100638 1837413 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:38:55.100752 1837413 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:42:55.101243 1837413 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000260156s
	I1124 09:42:55.101286 1837413 kubeadm.go:319] 
	I1124 09:42:55.101391 1837413 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 09:42:55.101448 1837413 kubeadm.go:319] 	- The kubelet is not running
	I1124 09:42:55.101705 1837413 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 09:42:55.101712 1837413 kubeadm.go:319] 
	I1124 09:42:55.102086 1837413 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 09:42:55.102143 1837413 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 09:42:55.102197 1837413 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 09:42:55.102203 1837413 kubeadm.go:319] 
	I1124 09:42:55.107013 1837413 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 09:42:55.107465 1837413 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 09:42:55.107580 1837413 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:42:55.107833 1837413 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 09:42:55.107837 1837413 kubeadm.go:319] 
	I1124 09:42:55.107909 1837413 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1124 09:42:55.108024 1837413 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-373432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-373432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000260156s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1124 09:42:55.108120 1837413 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 09:42:55.520150 1837413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:42:55.532697 1837413 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:42:55.532757 1837413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:42:55.540692 1837413 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:42:55.540701 1837413 kubeadm.go:158] found existing configuration files:
	
	I1124 09:42:55.540763 1837413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:42:55.548478 1837413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:42:55.548532 1837413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:42:55.555714 1837413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:42:55.562975 1837413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:42:55.563029 1837413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:42:55.570265 1837413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:42:55.578157 1837413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:42:55.578210 1837413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:42:55.585416 1837413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:42:55.593330 1837413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:42:55.593385 1837413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:42:55.600640 1837413 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:42:55.703726 1837413 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 09:42:55.704163 1837413 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 09:42:55.774518 1837413 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:46:57.346390 1837413 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 09:46:57.346422 1837413 kubeadm.go:319] 
	I1124 09:46:57.346541 1837413 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1124 09:46:57.349576 1837413 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:46:57.349659 1837413 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:46:57.349811 1837413 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:46:57.349910 1837413 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:46:57.349968 1837413 kubeadm.go:319] OS: Linux
	I1124 09:46:57.350045 1837413 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:46:57.350129 1837413 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:46:57.350209 1837413 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:46:57.350293 1837413 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:46:57.350378 1837413 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:46:57.350468 1837413 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:46:57.350545 1837413 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:46:57.350630 1837413 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:46:57.350709 1837413 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:46:57.350836 1837413 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:46:57.351004 1837413 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:46:57.351161 1837413 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:46:57.351270 1837413 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:46:57.354063 1837413 out.go:252]   - Generating certificates and keys ...
	I1124 09:46:57.354143 1837413 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:46:57.354211 1837413 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:46:57.354287 1837413 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 09:46:57.354348 1837413 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 09:46:57.354423 1837413 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 09:46:57.354475 1837413 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 09:46:57.354543 1837413 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 09:46:57.354603 1837413 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 09:46:57.354676 1837413 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 09:46:57.354778 1837413 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 09:46:57.354827 1837413 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 09:46:57.354883 1837413 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:46:57.354936 1837413 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:46:57.354993 1837413 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:46:57.355051 1837413 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:46:57.355125 1837413 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:46:57.355187 1837413 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:46:57.355270 1837413 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:46:57.355338 1837413 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:46:57.359979 1837413 out.go:252]   - Booting up control plane ...
	I1124 09:46:57.360080 1837413 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:46:57.360165 1837413 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:46:57.360234 1837413 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:46:57.360338 1837413 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:46:57.360466 1837413 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:46:57.360599 1837413 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:46:57.360690 1837413 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:46:57.360728 1837413 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:46:57.360875 1837413 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:46:57.360988 1837413 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:46:57.361081 1837413 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001181984s
	I1124 09:46:57.361096 1837413 kubeadm.go:319] 
	I1124 09:46:57.361181 1837413 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 09:46:57.361228 1837413 kubeadm.go:319] 	- The kubelet is not running
	I1124 09:46:57.361333 1837413 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 09:46:57.361336 1837413 kubeadm.go:319] 
	I1124 09:46:57.361445 1837413 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 09:46:57.361478 1837413 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 09:46:57.361507 1837413 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 09:46:57.361526 1837413 kubeadm.go:319] 
	I1124 09:46:57.361576 1837413 kubeadm.go:403] duration metric: took 8m8.945047468s to StartCluster
	I1124 09:46:57.361624 1837413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:46:57.361692 1837413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:46:57.392364 1837413 cri.go:89] found id: ""
	I1124 09:46:57.392378 1837413 logs.go:282] 0 containers: []
	W1124 09:46:57.392384 1837413 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:46:57.392390 1837413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:46:57.392450 1837413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:46:57.416566 1837413 cri.go:89] found id: ""
	I1124 09:46:57.416579 1837413 logs.go:282] 0 containers: []
	W1124 09:46:57.416586 1837413 logs.go:284] No container was found matching "etcd"
	I1124 09:46:57.416591 1837413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:46:57.416650 1837413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:46:57.441660 1837413 cri.go:89] found id: ""
	I1124 09:46:57.441674 1837413 logs.go:282] 0 containers: []
	W1124 09:46:57.441681 1837413 logs.go:284] No container was found matching "coredns"
	I1124 09:46:57.441686 1837413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:46:57.441745 1837413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:46:57.477796 1837413 cri.go:89] found id: ""
	I1124 09:46:57.477811 1837413 logs.go:282] 0 containers: []
	W1124 09:46:57.477818 1837413 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:46:57.477824 1837413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:46:57.477882 1837413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:46:57.506930 1837413 cri.go:89] found id: ""
	I1124 09:46:57.506943 1837413 logs.go:282] 0 containers: []
	W1124 09:46:57.506950 1837413 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:46:57.506956 1837413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:46:57.507013 1837413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:46:57.537451 1837413 cri.go:89] found id: ""
	I1124 09:46:57.537464 1837413 logs.go:282] 0 containers: []
	W1124 09:46:57.537471 1837413 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:46:57.537477 1837413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:46:57.537535 1837413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:46:57.563820 1837413 cri.go:89] found id: ""
	I1124 09:46:57.563833 1837413 logs.go:282] 0 containers: []
	W1124 09:46:57.563841 1837413 logs.go:284] No container was found matching "kindnet"
	I1124 09:46:57.563849 1837413 logs.go:123] Gathering logs for container status ...
	I1124 09:46:57.563859 1837413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:46:57.597500 1837413 logs.go:123] Gathering logs for kubelet ...
	I1124 09:46:57.597515 1837413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:46:57.665097 1837413 logs.go:123] Gathering logs for dmesg ...
	I1124 09:46:57.665122 1837413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:46:57.680645 1837413 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:46:57.680661 1837413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:46:57.744854 1837413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:46:57.736177    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:57.736742    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:57.738451    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:57.739059    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:57.740685    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:46:57.736177    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:57.736742    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:57.738451    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:57.739059    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:57.740685    5740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:46:57.744866 1837413 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:46:57.744877 1837413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1124 09:46:57.788024 1837413 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001181984s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1124 09:46:57.788084 1837413 out.go:285] * 
	W1124 09:46:57.788151 1837413 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001181984s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 09:46:57.788167 1837413 out.go:285] * 
	W1124 09:46:57.790380 1837413 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:46:57.796083 1837413 out.go:203] 
	W1124 09:46:57.800095 1837413 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001181984s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 09:46:57.800147 1837413 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1124 09:46:57.800200 1837413 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1124 09:46:57.804013 1837413 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 09:38:40 functional-373432 crio[838]: time="2025-11-24T09:38:40.097691146Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=46714a64-fa23-4bc7-89f6-51be4a9c7691 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:40 functional-373432 crio[838]: time="2025-11-24T09:38:40.097967827Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=46714a64-fa23-4bc7-89f6-51be4a9c7691 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:40 functional-373432 crio[838]: time="2025-11-24T09:38:40.098021308Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=46714a64-fa23-4bc7-89f6-51be4a9c7691 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:40 functional-373432 crio[838]: time="2025-11-24T09:38:40.136378033Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=11958ef8-1602-4774-9b24-fd30b4481e03 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:40 functional-373432 crio[838]: time="2025-11-24T09:38:40.13658021Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=11958ef8-1602-4774-9b24-fd30b4481e03 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:40 functional-373432 crio[838]: time="2025-11-24T09:38:40.136634217Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=11958ef8-1602-4774-9b24-fd30b4481e03 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.727079013Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4842e138-6fc4-4c99-8544-2651293048a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.730421775Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=853e8c1b-62a4-4b3c-97c7-b2d8b66f89f9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.731816779Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=74c5d8ae-32af-4037-9d5b-8b6ae0ac4c5c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.733273479Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb481621-84be-472c-a87e-c7b1c6a591bc name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.734174068Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=36eedf20-272e-460c-95ae-87885f248d55 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.735538541Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=1c8ebb6e-0d94-4e06-9a86-cda6a7bcc806 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.736485359Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c66ff42d-0ec2-4a90-869b-cab8017628ac name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.736595941Z" level=info msg="Image registry.k8s.io/etcd:3.6.5-0 not found" id=c66ff42d-0ec2-4a90-869b-cab8017628ac name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.736631634Z" level=info msg="Neither image nor artfiact registry.k8s.io/etcd:3.6.5-0 found" id=c66ff42d-0ec2-4a90-869b-cab8017628ac name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.737192789Z" level=info msg="Pulling image: registry.k8s.io/etcd:3.6.5-0" id=4cd8011d-c382-4921-a3e8-1ae83f5a3295 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:38:48 functional-373432 crio[838]: time="2025-11-24T09:38:48.738546226Z" level=info msg="Trying to access \"registry.k8s.io/etcd:3.6.5-0\""
	Nov 24 09:38:52 functional-373432 crio[838]: time="2025-11-24T09:38:52.180469548Z" level=info msg="Pulled image: registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e" id=4cd8011d-c382-4921-a3e8-1ae83f5a3295 name=/runtime.v1.ImageService/PullImage
	Nov 24 09:42:55 functional-373432 crio[838]: time="2025-11-24T09:42:55.777945395Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b5be8043-c65b-4f68-a56f-ebd94c44da50 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:42:55 functional-373432 crio[838]: time="2025-11-24T09:42:55.779514105Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bfbe22a6-8172-4059-927b-6f90f3d3c506 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:42:55 functional-373432 crio[838]: time="2025-11-24T09:42:55.780933273Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a69dcbb1-4be6-4234-8864-b5aac78a2e52 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:42:55 functional-373432 crio[838]: time="2025-11-24T09:42:55.782543394Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0cd487bc-e9f3-450d-8017-f3514efd1aad name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:42:55 functional-373432 crio[838]: time="2025-11-24T09:42:55.783628692Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=faa5c35c-92b4-4e73-a1ce-7133582048aa name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:42:55 functional-373432 crio[838]: time="2025-11-24T09:42:55.785000681Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d60c55aa-398b-4568-bce9-8b130274c6f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:42:55 functional-373432 crio[838]: time="2025-11-24T09:42:55.785779685Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=ec307ce5-2847-43c7-b1e4-713509d8a6c8 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:46:58.810989    5843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:58.811589    5843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:58.813140    5843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:58.813712    5843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:46:58.815262    5843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 09:46:58 up  8:29,  0 user,  load average: 0.14, 0.21, 0.72
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 09:46:56 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:46:56 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Nov 24 09:46:56 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:46:56 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:46:56 functional-373432 kubelet[5657]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:46:56 functional-373432 kubelet[5657]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:46:56 functional-373432 kubelet[5657]: E1124 09:46:56.754999    5657 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:46:56 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:46:56 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:46:57 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Nov 24 09:46:57 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:46:57 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:46:57 functional-373432 kubelet[5689]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:46:57 functional-373432 kubelet[5689]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:46:57 functional-373432 kubelet[5689]: E1124 09:46:57.518363    5689 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:46:57 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:46:57 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:46:58 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Nov 24 09:46:58 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:46:58 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:46:58 functional-373432 kubelet[5761]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:46:58 functional-373432 kubelet[5761]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:46:58 functional-373432 kubelet[5761]: E1124 09:46:58.282681    5761 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:46:58 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:46:58 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 6 (357.63843ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1124 09:46:59.314561 1844015 status.go:458] kubeconfig endpoint: get endpoint: "functional-373432" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (512.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1124 09:46:59.331556 1806704 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373432 --alsologtostderr -v=8
E1124 09:47:54.299688 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:48:22.003576 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:50:36.849874 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:52:54.299866 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-373432 --alsologtostderr -v=8: exit status 80 (6m6.90369201s)

-- stdout --
	* [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1124 09:46:59.387016 1844089 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:46:59.387211 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387243 1844089 out.go:374] Setting ErrFile to fd 2...
	I1124 09:46:59.387263 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387557 1844089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:46:59.388008 1844089 out.go:368] Setting JSON to false
	I1124 09:46:59.388882 1844089 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30570,"bootTime":1763947050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:46:59.388979 1844089 start.go:143] virtualization:  
	I1124 09:46:59.392592 1844089 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:46:59.396303 1844089 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:46:59.396370 1844089 notify.go:221] Checking for updates...
	I1124 09:46:59.402093 1844089 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:46:59.405033 1844089 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:46:59.407908 1844089 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:46:59.411405 1844089 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:46:59.414441 1844089 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:46:59.417923 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:46:59.418109 1844089 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:46:59.451337 1844089 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:46:59.451452 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.507906 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.498692309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.508018 1844089 docker.go:319] overlay module found
	I1124 09:46:59.511186 1844089 out.go:179] * Using the docker driver based on existing profile
	I1124 09:46:59.514098 1844089 start.go:309] selected driver: docker
	I1124 09:46:59.514123 1844089 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.514235 1844089 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:46:59.514350 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.569823 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.559648119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.570237 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:46:59.570306 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:46:59.570363 1844089 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.573590 1844089 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:46:59.576497 1844089 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:46:59.579448 1844089 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:46:59.582547 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:46:59.582648 1844089 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:46:59.602755 1844089 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:46:59.602781 1844089 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:46:59.648405 1844089 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:46:59.826473 1844089 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:46:59.826636 1844089 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:46:59.826856 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:46:59.826893 1844089 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:46:59.826927 1844089 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:46:59.826975 1844089 start.go:364] duration metric: took 25.756µs to acquireMachinesLock for "functional-373432"
	I1124 09:46:59.826990 1844089 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:46:59.826996 1844089 fix.go:54] fixHost starting: 
	I1124 09:46:59.827258 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:46:59.843979 1844089 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:46:59.844011 1844089 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:46:59.847254 1844089 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:46:59.847299 1844089 machine.go:94] provisionDockerMachine start ...
	I1124 09:46:59.847379 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:46:59.872683 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:46:59.873034 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:46:59.873051 1844089 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:46:59.992797 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.044426 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.044454 1844089 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:47:00.044547 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.104810 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.105156 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.105170 1844089 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:47:00.386378 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.386611 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.409023 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.411110 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.411442 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.411467 1844089 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:00.595280 1844089 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595319 1844089 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595392 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:47:00.595381 1844089 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595403 1844089 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 139.325µs
	I1124 09:47:00.595412 1844089 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595423 1844089 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595434 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:47:00.595442 1844089 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 62.902µs
	I1124 09:47:00.595450 1844089 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595457 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:47:00.595463 1844089 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 41.207µs
	I1124 09:47:00.595469 1844089 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:47:00.595461 1844089 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595477 1844089 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595494 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:47:00.595500 1844089 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.394µs
	I1124 09:47:00.595507 1844089 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595510 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:47:00.595517 1844089 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595524 1844089 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.5µs
	I1124 09:47:00.595532 1844089 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595282 1844089 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595546 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:47:00.595552 1844089 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 36.923µs
	I1124 09:47:00.595556 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:47:00.595558 1844089 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:47:00.595562 1844089 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 302.437µs
	I1124 09:47:00.595572 1844089 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:47:00.595568 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:47:00.595581 1844089 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 263.856µs
	I1124 09:47:00.595587 1844089 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:47:00.595593 1844089 cache.go:87] Successfully saved all images to host disk.
	I1124 09:47:00.596331 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:00.596354 1844089 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:47:00.596379 1844089 ubuntu.go:190] setting up certificates
	I1124 09:47:00.596403 1844089 provision.go:84] configureAuth start
	I1124 09:47:00.596480 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:00.614763 1844089 provision.go:143] copyHostCerts
	I1124 09:47:00.614805 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614845 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:47:00.614865 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614942 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:47:00.615049 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615076 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:47:00.615081 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615111 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:47:00.615166 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615187 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:47:00.615191 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615218 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:47:00.615273 1844089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:47:00.746073 1844089 provision.go:177] copyRemoteCerts
	I1124 09:47:00.746146 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:00.746187 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.767050 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:00.873044 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1124 09:47:00.873153 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:00.891124 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1124 09:47:00.891207 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:47:00.909032 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1124 09:47:00.909209 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:47:00.927426 1844089 provision.go:87] duration metric: took 330.992349ms to configureAuth
	I1124 09:47:00.927482 1844089 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:47:00.927686 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:00.927808 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.945584 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.945906 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.945929 1844089 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:01.279482 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:01.279511 1844089 machine.go:97] duration metric: took 1.432203745s to provisionDockerMachine
	I1124 09:47:01.279522 1844089 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:47:01.279534 1844089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:01.279608 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:01.279659 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.306223 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.413310 1844089 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:01.416834 1844089 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1124 09:47:01.416855 1844089 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1124 09:47:01.416859 1844089 command_runner.go:130] > VERSION_ID="12"
	I1124 09:47:01.416863 1844089 command_runner.go:130] > VERSION="12 (bookworm)"
	I1124 09:47:01.416868 1844089 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1124 09:47:01.416884 1844089 command_runner.go:130] > ID=debian
	I1124 09:47:01.416889 1844089 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1124 09:47:01.416894 1844089 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1124 09:47:01.416900 1844089 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1124 09:47:01.416956 1844089 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:47:01.416971 1844089 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:47:01.416982 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:47:01.417038 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:47:01.417141 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:47:01.417149 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /etc/ssl/certs/18067042.pem
	I1124 09:47:01.417225 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:47:01.417238 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> /etc/test/nested/copy/1806704/hosts
	I1124 09:47:01.417285 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:47:01.425057 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:01.443829 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:47:01.461688 1844089 start.go:296] duration metric: took 182.151565ms for postStartSetup
	I1124 09:47:01.461806 1844089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:47:01.461866 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.478949 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.582285 1844089 command_runner.go:130] > 19%
	I1124 09:47:01.582359 1844089 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:47:01.587262 1844089 command_runner.go:130] > 159G
	I1124 09:47:01.587296 1844089 fix.go:56] duration metric: took 1.760298367s for fixHost
	I1124 09:47:01.587308 1844089 start.go:83] releasing machines lock for "functional-373432", held for 1.76032423s
	I1124 09:47:01.587385 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:01.605227 1844089 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:01.605290 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.605558 1844089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:01.605651 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.623897 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.640948 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.724713 1844089 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1763789673-21948", "minikube_version": "v1.37.0", "commit": "2996c7ec74d570fa8ab37e6f4f8813150d0c7473"}
	I1124 09:47:01.724863 1844089 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:01.812522 1844089 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1124 09:47:01.816014 1844089 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1124 09:47:01.816053 1844089 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1124 09:47:01.816128 1844089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:01.851397 1844089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1124 09:47:01.855673 1844089 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1124 09:47:01.855841 1844089 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:01.855908 1844089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:01.863705 1844089 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:47:01.863730 1844089 start.go:496] detecting cgroup driver to use...
	I1124 09:47:01.863762 1844089 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:47:01.863809 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:01.879426 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:01.892902 1844089 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:01.892974 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:01.908995 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:01.922294 1844089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:02.052541 1844089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:02.189051 1844089 docker.go:234] disabling docker service ...
	I1124 09:47:02.189218 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:02.205065 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:02.219126 1844089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:02.329712 1844089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:02.449311 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:02.462019 1844089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:02.474641 1844089 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1124 09:47:02.476035 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:02.633334 1844089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:02.633408 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.642946 1844089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:02.643028 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.652272 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.661578 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.670499 1844089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:02.678769 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.688087 1844089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.696980 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.705967 1844089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:02.713426 1844089 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1124 09:47:02.713510 1844089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:47:02.720989 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:02.841969 1844089 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:03.036830 1844089 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:03.036905 1844089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:03.040587 1844089 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1124 09:47:03.040611 1844089 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1124 09:47:03.040618 1844089 command_runner.go:130] > Device: 0,72	Inode: 1805        Links: 1
	I1124 09:47:03.040633 1844089 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:03.040639 1844089 command_runner.go:130] > Access: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040645 1844089 command_runner.go:130] > Modify: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040654 1844089 command_runner.go:130] > Change: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040658 1844089 command_runner.go:130] >  Birth: -
	I1124 09:47:03.041299 1844089 start.go:564] Will wait 60s for crictl version
	I1124 09:47:03.041375 1844089 ssh_runner.go:195] Run: which crictl
	I1124 09:47:03.044736 1844089 command_runner.go:130] > /usr/local/bin/crictl
	I1124 09:47:03.045405 1844089 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:47:03.072144 1844089 command_runner.go:130] > Version:  0.1.0
	I1124 09:47:03.072339 1844089 command_runner.go:130] > RuntimeName:  cri-o
	I1124 09:47:03.072489 1844089 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1124 09:47:03.072634 1844089 command_runner.go:130] > RuntimeApiVersion:  v1
	I1124 09:47:03.075078 1844089 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:47:03.075181 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.102664 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.102689 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.102697 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.102702 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.102708 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.102713 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.102717 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.102722 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.102726 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.102730 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.102734 1844089 command_runner.go:130] >      static
	I1124 09:47:03.102737 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.102741 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.102745 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.102753 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.102757 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.102763 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.102768 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.102772 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.102781 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.104732 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.133953 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.133980 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.133987 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.133991 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.133996 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.134000 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.134004 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.134008 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.134012 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.134016 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.134019 1844089 command_runner.go:130] >      static
	I1124 09:47:03.134023 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.134027 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.134031 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.134039 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.134043 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.134050 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.134056 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.134060 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.134068 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.140942 1844089 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:47:03.143873 1844089 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:47:03.160952 1844089 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:03.165052 1844089 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1124 09:47:03.165287 1844089 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:03.165490 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.325050 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.479106 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.632699 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:03.632773 1844089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:03.664623 1844089 command_runner.go:130] > {
	I1124 09:47:03.664647 1844089 command_runner.go:130] >   "images":  [
	I1124 09:47:03.664652 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664661 1844089 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1124 09:47:03.664666 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664683 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1124 09:47:03.664695 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664705 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664715 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1124 09:47:03.664722 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664727 1844089 command_runner.go:130] >       "size":  "29035622",
	I1124 09:47:03.664734 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664738 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664746 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664750 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664760 1844089 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1124 09:47:03.664768 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664775 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1124 09:47:03.664780 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664788 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664797 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1124 09:47:03.664804 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664808 1844089 command_runner.go:130] >       "size":  "74488375",
	I1124 09:47:03.664816 1844089 command_runner.go:130] >       "username":  "nonroot",
	I1124 09:47:03.664820 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664827 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664831 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664838 1844089 command_runner.go:130] >       "id":  "1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca",
	I1124 09:47:03.664845 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664851 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.24-0"
	I1124 09:47:03.664855 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664859 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664873 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:62cae8d38d7e1187ef2841ebc55bef1c5a46f21a69675fae8351f92d3a3e9bc6"
	I1124 09:47:03.664880 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664885 1844089 command_runner.go:130] >       "size":  "63341525",
	I1124 09:47:03.664892 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.664896 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.664904 1844089 command_runner.go:130] >       },
	I1124 09:47:03.664908 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664923 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664929 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664932 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664939 1844089 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1124 09:47:03.664947 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664951 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1124 09:47:03.664959 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664963 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664974 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1124 09:47:03.664987 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1124 09:47:03.664994 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664999 1844089 command_runner.go:130] >       "size":  "60857170",
	I1124 09:47:03.665002 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665009 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665013 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665016 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665020 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665024 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665028 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665039 1844089 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1124 09:47:03.665043 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665053 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1124 09:47:03.665057 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665065 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665078 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1124 09:47:03.665085 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665089 1844089 command_runner.go:130] >       "size":  "84947242",
	I1124 09:47:03.665093 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665131 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665140 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665144 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665148 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665155 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665163 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665174 1844089 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1124 09:47:03.665181 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665187 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1124 09:47:03.665195 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665198 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665206 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1124 09:47:03.665213 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665217 1844089 command_runner.go:130] >       "size":  "72167568",
	I1124 09:47:03.665221 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665229 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665232 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665236 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665244 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665247 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665254 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665262 1844089 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1124 09:47:03.665269 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665275 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1124 09:47:03.665278 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665285 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665292 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1124 09:47:03.665299 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665304 1844089 command_runner.go:130] >       "size":  "74105124",
	I1124 09:47:03.665308 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665315 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665319 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665326 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665333 1844089 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1124 09:47:03.665340 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665346 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1124 09:47:03.665353 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665357 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665369 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1124 09:47:03.665376 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665380 1844089 command_runner.go:130] >       "size":  "49819792",
	I1124 09:47:03.665384 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665388 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665396 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665401 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665405 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665412 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665415 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665426 1844089 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1124 09:47:03.665434 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665439 1844089 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.665442 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665446 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665456 1844089 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1124 09:47:03.665460 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665469 1844089 command_runner.go:130] >       "size":  "517328",
	I1124 09:47:03.665473 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665478 1844089 command_runner.go:130] >         "value":  "65535"
	I1124 09:47:03.665485 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665489 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665499 1844089 command_runner.go:130] >       "pinned":  true
	I1124 09:47:03.665506 1844089 command_runner.go:130] >     }
	I1124 09:47:03.665510 1844089 command_runner.go:130] >   ]
	I1124 09:47:03.665517 1844089 command_runner.go:130] > }
	I1124 09:47:03.667798 1844089 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:47:03.667821 1844089 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:47:03.667827 1844089 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:47:03.667924 1844089 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:47:03.668011 1844089 ssh_runner.go:195] Run: crio config
	I1124 09:47:03.726362 1844089 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1124 09:47:03.726390 1844089 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1124 09:47:03.726403 1844089 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1124 09:47:03.726416 1844089 command_runner.go:130] > #
	I1124 09:47:03.726461 1844089 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1124 09:47:03.726469 1844089 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1124 09:47:03.726481 1844089 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1124 09:47:03.726488 1844089 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1124 09:47:03.726498 1844089 command_runner.go:130] > # reload'.
	I1124 09:47:03.726518 1844089 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1124 09:47:03.726529 1844089 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1124 09:47:03.726536 1844089 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1124 09:47:03.726563 1844089 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1124 09:47:03.726573 1844089 command_runner.go:130] > [crio]
	I1124 09:47:03.726579 1844089 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1124 09:47:03.726585 1844089 command_runner.go:130] > # containers images, in this directory.
	I1124 09:47:03.727202 1844089 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1124 09:47:03.727221 1844089 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1124 09:47:03.727766 1844089 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1124 09:47:03.727795 1844089 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1124 09:47:03.728310 1844089 command_runner.go:130] > # imagestore = ""
	I1124 09:47:03.728328 1844089 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1124 09:47:03.728337 1844089 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1124 09:47:03.728921 1844089 command_runner.go:130] > # storage_driver = "overlay"
	I1124 09:47:03.728938 1844089 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1124 09:47:03.728946 1844089 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1124 09:47:03.729270 1844089 command_runner.go:130] > # storage_option = [
	I1124 09:47:03.729595 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.729612 1844089 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1124 09:47:03.729620 1844089 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1124 09:47:03.730268 1844089 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1124 09:47:03.730286 1844089 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1124 09:47:03.730295 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1124 09:47:03.730299 1844089 command_runner.go:130] > # always happen on a node reboot
	I1124 09:47:03.730901 1844089 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1124 09:47:03.730939 1844089 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1124 09:47:03.730951 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1124 09:47:03.730957 1844089 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1124 09:47:03.731426 1844089 command_runner.go:130] > # version_file_persist = ""
	I1124 09:47:03.731444 1844089 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1124 09:47:03.731453 1844089 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1124 09:47:03.732044 1844089 command_runner.go:130] > # internal_wipe = true
	I1124 09:47:03.732064 1844089 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1124 09:47:03.732071 1844089 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1124 09:47:03.732663 1844089 command_runner.go:130] > # internal_repair = true
	I1124 09:47:03.732708 1844089 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1124 09:47:03.732717 1844089 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1124 09:47:03.732723 1844089 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1124 09:47:03.733344 1844089 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1124 09:47:03.733360 1844089 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1124 09:47:03.733364 1844089 command_runner.go:130] > [crio.api]
	I1124 09:47:03.733370 1844089 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1124 09:47:03.733954 1844089 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1124 09:47:03.733970 1844089 command_runner.go:130] > # IP address on which the stream server will listen.
	I1124 09:47:03.734597 1844089 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1124 09:47:03.734618 1844089 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1124 09:47:03.734638 1844089 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1124 09:47:03.735322 1844089 command_runner.go:130] > # stream_port = "0"
	I1124 09:47:03.735342 1844089 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1124 09:47:03.735920 1844089 command_runner.go:130] > # stream_enable_tls = false
	I1124 09:47:03.735936 1844089 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1124 09:47:03.736379 1844089 command_runner.go:130] > # stream_idle_timeout = ""
	I1124 09:47:03.736427 1844089 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1124 09:47:03.736442 1844089 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1124 09:47:03.736931 1844089 command_runner.go:130] > # stream_tls_cert = ""
	I1124 09:47:03.736947 1844089 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1124 09:47:03.736954 1844089 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1124 09:47:03.737422 1844089 command_runner.go:130] > # stream_tls_key = ""
	I1124 09:47:03.737439 1844089 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1124 09:47:03.737447 1844089 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1124 09:47:03.737466 1844089 command_runner.go:130] > # automatically pick up the changes.
	I1124 09:47:03.737919 1844089 command_runner.go:130] > # stream_tls_ca = ""
	I1124 09:47:03.737973 1844089 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.738690 1844089 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1124 09:47:03.738709 1844089 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.739334 1844089 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1124 09:47:03.739351 1844089 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1124 09:47:03.739358 1844089 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1124 09:47:03.739383 1844089 command_runner.go:130] > [crio.runtime]
	I1124 09:47:03.739395 1844089 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1124 09:47:03.739402 1844089 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1124 09:47:03.739406 1844089 command_runner.go:130] > # "nofile=1024:2048"
	I1124 09:47:03.739432 1844089 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1124 09:47:03.739736 1844089 command_runner.go:130] > # default_ulimits = [
	I1124 09:47:03.740060 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.740075 1844089 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1124 09:47:03.740677 1844089 command_runner.go:130] > # no_pivot = false
	I1124 09:47:03.740693 1844089 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1124 09:47:03.740700 1844089 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1124 09:47:03.741305 1844089 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1124 09:47:03.741322 1844089 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1124 09:47:03.741328 1844089 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1124 09:47:03.741356 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.741816 1844089 command_runner.go:130] > # conmon = ""
	I1124 09:47:03.741833 1844089 command_runner.go:130] > # Cgroup setting for conmon
	I1124 09:47:03.741841 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1124 09:47:03.742193 1844089 command_runner.go:130] > conmon_cgroup = "pod"
	I1124 09:47:03.742211 1844089 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1124 09:47:03.742237 1844089 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1124 09:47:03.742253 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.742594 1844089 command_runner.go:130] > # conmon_env = [
	I1124 09:47:03.742962 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.742977 1844089 command_runner.go:130] > # Additional environment variables to set for all the
	I1124 09:47:03.742984 1844089 command_runner.go:130] > # containers. These are overridden if set in the
	I1124 09:47:03.742990 1844089 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1124 09:47:03.743288 1844089 command_runner.go:130] > # default_env = [
	I1124 09:47:03.743607 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.743619 1844089 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1124 09:47:03.743646 1844089 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1124 09:47:03.744217 1844089 command_runner.go:130] > # selinux = false
	I1124 09:47:03.744234 1844089 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1124 09:47:03.744279 1844089 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1124 09:47:03.744293 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.744768 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.744784 1844089 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1124 09:47:03.744790 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745254 1844089 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1124 09:47:03.745273 1844089 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1124 09:47:03.745281 1844089 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1124 09:47:03.745308 1844089 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1124 09:47:03.745322 1844089 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1124 09:47:03.745328 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745934 1844089 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1124 09:47:03.745975 1844089 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1124 09:47:03.745989 1844089 command_runner.go:130] > # the cgroup blockio controller.
	I1124 09:47:03.746500 1844089 command_runner.go:130] > # blockio_config_file = ""
	I1124 09:47:03.746515 1844089 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1124 09:47:03.746541 1844089 command_runner.go:130] > # blockio parameters.
	I1124 09:47:03.747165 1844089 command_runner.go:130] > # blockio_reload = false
	I1124 09:47:03.747182 1844089 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1124 09:47:03.747187 1844089 command_runner.go:130] > # irqbalance daemon.
	I1124 09:47:03.747784 1844089 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1124 09:47:03.747803 1844089 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1124 09:47:03.747830 1844089 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1124 09:47:03.747843 1844089 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1124 09:47:03.748453 1844089 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1124 09:47:03.748471 1844089 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1124 09:47:03.748496 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.748966 1844089 command_runner.go:130] > # rdt_config_file = ""
	I1124 09:47:03.748982 1844089 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1124 09:47:03.749348 1844089 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1124 09:47:03.749364 1844089 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1124 09:47:03.749770 1844089 command_runner.go:130] > # separate_pull_cgroup = ""
	I1124 09:47:03.749788 1844089 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1124 09:47:03.749796 1844089 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1124 09:47:03.749820 1844089 command_runner.go:130] > # will be added.
	I1124 09:47:03.749833 1844089 command_runner.go:130] > # default_capabilities = [
	I1124 09:47:03.750067 1844089 command_runner.go:130] > # 	"CHOWN",
	I1124 09:47:03.750401 1844089 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1124 09:47:03.750646 1844089 command_runner.go:130] > # 	"FSETID",
	I1124 09:47:03.750659 1844089 command_runner.go:130] > # 	"FOWNER",
	I1124 09:47:03.750665 1844089 command_runner.go:130] > # 	"SETGID",
	I1124 09:47:03.750669 1844089 command_runner.go:130] > # 	"SETUID",
	I1124 09:47:03.750725 1844089 command_runner.go:130] > # 	"SETPCAP",
	I1124 09:47:03.750739 1844089 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1124 09:47:03.750745 1844089 command_runner.go:130] > # 	"KILL",
	I1124 09:47:03.750755 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.750774 1844089 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1124 09:47:03.750785 1844089 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1124 09:47:03.750991 1844089 command_runner.go:130] > # add_inheritable_capabilities = false
	I1124 09:47:03.751004 1844089 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1124 09:47:03.751023 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751034 1844089 command_runner.go:130] > default_sysctls = [
	I1124 09:47:03.751219 1844089 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1124 09:47:03.751480 1844089 command_runner.go:130] > ]
	I1124 09:47:03.751494 1844089 command_runner.go:130] > # List of devices on the host that a
	I1124 09:47:03.751501 1844089 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1124 09:47:03.751522 1844089 command_runner.go:130] > # allowed_devices = [
	I1124 09:47:03.751532 1844089 command_runner.go:130] > # 	"/dev/fuse",
	I1124 09:47:03.751536 1844089 command_runner.go:130] > # 	"/dev/net/tun",
	I1124 09:47:03.751539 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751545 1844089 command_runner.go:130] > # List of additional devices. specified as
	I1124 09:47:03.751558 1844089 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1124 09:47:03.751576 1844089 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1124 09:47:03.751614 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751625 1844089 command_runner.go:130] > # additional_devices = [
	I1124 09:47:03.751802 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751816 1844089 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1124 09:47:03.752056 1844089 command_runner.go:130] > # cdi_spec_dirs = [
	I1124 09:47:03.752288 1844089 command_runner.go:130] > # 	"/etc/cdi",
	I1124 09:47:03.752302 1844089 command_runner.go:130] > # 	"/var/run/cdi",
	I1124 09:47:03.752307 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752313 1844089 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1124 09:47:03.752348 1844089 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1124 09:47:03.752353 1844089 command_runner.go:130] > # Defaults to false.
	I1124 09:47:03.752752 1844089 command_runner.go:130] > # device_ownership_from_security_context = false
	I1124 09:47:03.752770 1844089 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1124 09:47:03.752778 1844089 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1124 09:47:03.752782 1844089 command_runner.go:130] > # hooks_dir = [
	I1124 09:47:03.752808 1844089 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1124 09:47:03.752819 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752826 1844089 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1124 09:47:03.752833 1844089 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1124 09:47:03.752842 1844089 command_runner.go:130] > # its default mounts from the following two files:
	I1124 09:47:03.752845 1844089 command_runner.go:130] > #
	I1124 09:47:03.752852 1844089 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1124 09:47:03.752858 1844089 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1124 09:47:03.752881 1844089 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1124 09:47:03.752891 1844089 command_runner.go:130] > #
	I1124 09:47:03.752897 1844089 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1124 09:47:03.752913 1844089 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1124 09:47:03.752928 1844089 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1124 09:47:03.752934 1844089 command_runner.go:130] > #      only add mounts it finds in this file.
	I1124 09:47:03.752937 1844089 command_runner.go:130] > #
	I1124 09:47:03.752941 1844089 command_runner.go:130] > # default_mounts_file = ""
	I1124 09:47:03.752946 1844089 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1124 09:47:03.752955 1844089 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1124 09:47:03.753190 1844089 command_runner.go:130] > # pids_limit = -1
	I1124 09:47:03.753207 1844089 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1124 09:47:03.753245 1844089 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1124 09:47:03.753260 1844089 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1124 09:47:03.753269 1844089 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1124 09:47:03.753278 1844089 command_runner.go:130] > # log_size_max = -1
	I1124 09:47:03.753287 1844089 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1124 09:47:03.753296 1844089 command_runner.go:130] > # log_to_journald = false
	I1124 09:47:03.753313 1844089 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1124 09:47:03.753722 1844089 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1124 09:47:03.753734 1844089 command_runner.go:130] > # Path to directory for container attach sockets.
	I1124 09:47:03.753771 1844089 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1124 09:47:03.753785 1844089 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1124 09:47:03.753789 1844089 command_runner.go:130] > # bind_mount_prefix = ""
	I1124 09:47:03.753796 1844089 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1124 09:47:03.753804 1844089 command_runner.go:130] > # read_only = false
	I1124 09:47:03.753810 1844089 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1124 09:47:03.753817 1844089 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1124 09:47:03.753824 1844089 command_runner.go:130] > # live configuration reload.
	I1124 09:47:03.753828 1844089 command_runner.go:130] > # log_level = "info"
	I1124 09:47:03.753845 1844089 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1124 09:47:03.753857 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.754025 1844089 command_runner.go:130] > # log_filter = ""
	I1124 09:47:03.754041 1844089 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754049 1844089 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1124 09:47:03.754066 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754079 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754487 1844089 command_runner.go:130] > # uid_mappings = ""
	I1124 09:47:03.754504 1844089 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754512 1844089 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1124 09:47:03.754516 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754547 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754559 1844089 command_runner.go:130] > # gid_mappings = ""
	I1124 09:47:03.754565 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1124 09:47:03.754572 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754582 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754590 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754595 1844089 command_runner.go:130] > # minimum_mappable_uid = -1
	I1124 09:47:03.754627 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1124 09:47:03.754641 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754648 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754662 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754929 1844089 command_runner.go:130] > # minimum_mappable_gid = -1
	I1124 09:47:03.754942 1844089 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1124 09:47:03.754970 1844089 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1124 09:47:03.754983 1844089 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1124 09:47:03.754989 1844089 command_runner.go:130] > # ctr_stop_timeout = 30
	I1124 09:47:03.754994 1844089 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1124 09:47:03.755006 1844089 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1124 09:47:03.755011 1844089 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1124 09:47:03.755016 1844089 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1124 09:47:03.755021 1844089 command_runner.go:130] > # drop_infra_ctr = true
	I1124 09:47:03.755048 1844089 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1124 09:47:03.755061 1844089 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1124 09:47:03.755080 1844089 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1124 09:47:03.755090 1844089 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1124 09:47:03.755098 1844089 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1124 09:47:03.755104 1844089 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1124 09:47:03.755110 1844089 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1124 09:47:03.755118 1844089 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1124 09:47:03.755122 1844089 command_runner.go:130] > # shared_cpuset = ""
	I1124 09:47:03.755135 1844089 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1124 09:47:03.755143 1844089 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1124 09:47:03.755164 1844089 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1124 09:47:03.755182 1844089 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1124 09:47:03.755369 1844089 command_runner.go:130] > # pinns_path = ""
	I1124 09:47:03.755383 1844089 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1124 09:47:03.755391 1844089 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1124 09:47:03.755617 1844089 command_runner.go:130] > # enable_criu_support = true
	I1124 09:47:03.755632 1844089 command_runner.go:130] > # Enable/disable the generation of the container,
	I1124 09:47:03.755639 1844089 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1124 09:47:03.755935 1844089 command_runner.go:130] > # enable_pod_events = false
	I1124 09:47:03.755951 1844089 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1124 09:47:03.755976 1844089 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1124 09:47:03.755988 1844089 command_runner.go:130] > # default_runtime = "crun"
	I1124 09:47:03.756007 1844089 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1124 09:47:03.756063 1844089 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1124 09:47:03.756088 1844089 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1124 09:47:03.756099 1844089 command_runner.go:130] > # creation as a file is not desired either.
	I1124 09:47:03.756108 1844089 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1124 09:47:03.756127 1844089 command_runner.go:130] > # the hostname is being managed dynamically.
	I1124 09:47:03.756133 1844089 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1124 09:47:03.756166 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.756181 1844089 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1124 09:47:03.756199 1844089 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1124 09:47:03.756211 1844089 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1124 09:47:03.756217 1844089 command_runner.go:130] > # Each entry in the table should follow the format:
	I1124 09:47:03.756220 1844089 command_runner.go:130] > #
	I1124 09:47:03.756230 1844089 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1124 09:47:03.756235 1844089 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1124 09:47:03.756244 1844089 command_runner.go:130] > # runtime_type = "oci"
	I1124 09:47:03.756248 1844089 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1124 09:47:03.756253 1844089 command_runner.go:130] > # inherit_default_runtime = false
	I1124 09:47:03.756258 1844089 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1124 09:47:03.756285 1844089 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1124 09:47:03.756297 1844089 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1124 09:47:03.756301 1844089 command_runner.go:130] > # monitor_env = []
	I1124 09:47:03.756306 1844089 command_runner.go:130] > # privileged_without_host_devices = false
	I1124 09:47:03.756313 1844089 command_runner.go:130] > # allowed_annotations = []
	I1124 09:47:03.756319 1844089 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1124 09:47:03.756330 1844089 command_runner.go:130] > # no_sync_log = false
	I1124 09:47:03.756335 1844089 command_runner.go:130] > # default_annotations = {}
	I1124 09:47:03.756339 1844089 command_runner.go:130] > # stream_websockets = false
	I1124 09:47:03.756349 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.756390 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.756402 1844089 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1124 09:47:03.756409 1844089 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1124 09:47:03.756416 1844089 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1124 09:47:03.756427 1844089 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1124 09:47:03.756448 1844089 command_runner.go:130] > #   in $PATH.
	I1124 09:47:03.756456 1844089 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1124 09:47:03.756461 1844089 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1124 09:47:03.756468 1844089 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1124 09:47:03.756477 1844089 command_runner.go:130] > #   state.
	I1124 09:47:03.756489 1844089 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1124 09:47:03.756495 1844089 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1124 09:47:03.756515 1844089 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1124 09:47:03.756528 1844089 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1124 09:47:03.756534 1844089 command_runner.go:130] > #   the values from the default runtime on load time.
	I1124 09:47:03.756542 1844089 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1124 09:47:03.756551 1844089 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1124 09:47:03.756557 1844089 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1124 09:47:03.756564 1844089 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1124 09:47:03.756571 1844089 command_runner.go:130] > #   The currently recognized values are:
	I1124 09:47:03.756579 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1124 09:47:03.756608 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1124 09:47:03.756621 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1124 09:47:03.756627 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1124 09:47:03.756635 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1124 09:47:03.756647 1844089 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1124 09:47:03.756654 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1124 09:47:03.756661 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1124 09:47:03.756671 1844089 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1124 09:47:03.756687 1844089 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1124 09:47:03.756700 1844089 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1124 09:47:03.756720 1844089 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1124 09:47:03.756731 1844089 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1124 09:47:03.756738 1844089 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1124 09:47:03.756751 1844089 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1124 09:47:03.756759 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1124 09:47:03.756769 1844089 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1124 09:47:03.756774 1844089 command_runner.go:130] > #   deprecated option "conmon".
	I1124 09:47:03.756781 1844089 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1124 09:47:03.756803 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1124 09:47:03.756820 1844089 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1124 09:47:03.756831 1844089 command_runner.go:130] > #   should be moved to the container's cgroup
	I1124 09:47:03.756843 1844089 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1124 09:47:03.756853 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1124 09:47:03.756862 1844089 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1124 09:47:03.756870 1844089 command_runner.go:130] > #   conmon-rs by using:
	I1124 09:47:03.756878 1844089 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1124 09:47:03.756886 1844089 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1124 09:47:03.756907 1844089 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1124 09:47:03.756926 1844089 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1124 09:47:03.756938 1844089 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1124 09:47:03.756945 1844089 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1124 09:47:03.756958 1844089 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1124 09:47:03.756963 1844089 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1124 09:47:03.756972 1844089 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1124 09:47:03.756984 1844089 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1124 09:47:03.756999 1844089 command_runner.go:130] > #   when a machine crash happens.
	I1124 09:47:03.757012 1844089 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1124 09:47:03.757021 1844089 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1124 09:47:03.757033 1844089 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1124 09:47:03.757038 1844089 command_runner.go:130] > #   seccomp profile for the runtime.
	I1124 09:47:03.757047 1844089 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1124 09:47:03.757058 1844089 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1124 09:47:03.757076 1844089 command_runner.go:130] > #
	I1124 09:47:03.757087 1844089 command_runner.go:130] > # Using the seccomp notifier feature:
	I1124 09:47:03.757091 1844089 command_runner.go:130] > #
	I1124 09:47:03.757115 1844089 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1124 09:47:03.757130 1844089 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1124 09:47:03.757134 1844089 command_runner.go:130] > #
	I1124 09:47:03.757141 1844089 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1124 09:47:03.757151 1844089 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1124 09:47:03.757154 1844089 command_runner.go:130] > #
	I1124 09:47:03.757165 1844089 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1124 09:47:03.757172 1844089 command_runner.go:130] > # feature.
	I1124 09:47:03.757175 1844089 command_runner.go:130] > #
	I1124 09:47:03.757195 1844089 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1124 09:47:03.757204 1844089 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1124 09:47:03.757220 1844089 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1124 09:47:03.757233 1844089 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1124 09:47:03.757239 1844089 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1124 09:47:03.757247 1844089 command_runner.go:130] > #
	I1124 09:47:03.757258 1844089 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1124 09:47:03.757268 1844089 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1124 09:47:03.757271 1844089 command_runner.go:130] > #
	I1124 09:47:03.757277 1844089 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1124 09:47:03.757283 1844089 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1124 09:47:03.757298 1844089 command_runner.go:130] > #
	I1124 09:47:03.757320 1844089 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1124 09:47:03.757333 1844089 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1124 09:47:03.757341 1844089 command_runner.go:130] > # limitation.
	I1124 09:47:03.757617 1844089 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1124 09:47:03.757630 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1124 09:47:03.757635 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.757639 1844089 command_runner.go:130] > runtime_root = "/run/crun"
	I1124 09:47:03.757643 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.757670 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.757675 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.757680 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.757690 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.757695 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.757700 1844089 command_runner.go:130] > allowed_annotations = [
	I1124 09:47:03.757954 1844089 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1124 09:47:03.757971 1844089 command_runner.go:130] > ]
	I1124 09:47:03.757978 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.757982 1844089 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1124 09:47:03.758003 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1124 09:47:03.758013 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.758018 1844089 command_runner.go:130] > runtime_root = "/run/runc"
	I1124 09:47:03.758023 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.758033 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.758037 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.758042 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.758047 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.758051 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.758456 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.758471 1844089 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1124 09:47:03.758477 1844089 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1124 09:47:03.758504 1844089 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1124 09:47:03.758514 1844089 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1124 09:47:03.758525 1844089 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1124 09:47:03.758550 1844089 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1124 09:47:03.758572 1844089 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1124 09:47:03.758585 1844089 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1124 09:47:03.758595 1844089 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1124 09:47:03.758608 1844089 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1124 09:47:03.758614 1844089 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1124 09:47:03.758621 1844089 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1124 09:47:03.758629 1844089 command_runner.go:130] > # Example:
	I1124 09:47:03.758634 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1124 09:47:03.758650 1844089 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1124 09:47:03.758663 1844089 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1124 09:47:03.758670 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1124 09:47:03.758684 1844089 command_runner.go:130] > # cpuset = "0-1"
	I1124 09:47:03.758691 1844089 command_runner.go:130] > # cpushares = "5"
	I1124 09:47:03.758695 1844089 command_runner.go:130] > # cpuquota = "1000"
	I1124 09:47:03.758700 1844089 command_runner.go:130] > # cpuperiod = "100000"
	I1124 09:47:03.758703 1844089 command_runner.go:130] > # cpulimit = "35"
	I1124 09:47:03.758714 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.758719 1844089 command_runner.go:130] > # The workload name is workload-type.
	I1124 09:47:03.758726 1844089 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1124 09:47:03.758738 1844089 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1124 09:47:03.758744 1844089 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1124 09:47:03.758763 1844089 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1124 09:47:03.758772 1844089 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1124 09:47:03.758787 1844089 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1124 09:47:03.758800 1844089 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1124 09:47:03.758805 1844089 command_runner.go:130] > # Default value is set to true
	I1124 09:47:03.758816 1844089 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1124 09:47:03.758822 1844089 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1124 09:47:03.758827 1844089 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1124 09:47:03.758837 1844089 command_runner.go:130] > # Default value is set to 'false'
	I1124 09:47:03.758841 1844089 command_runner.go:130] > # disable_hostport_mapping = false
	I1124 09:47:03.758846 1844089 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1124 09:47:03.758869 1844089 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1124 09:47:03.759115 1844089 command_runner.go:130] > # timezone = ""
	I1124 09:47:03.759131 1844089 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1124 09:47:03.759134 1844089 command_runner.go:130] > #
	I1124 09:47:03.759141 1844089 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1124 09:47:03.759163 1844089 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1124 09:47:03.759174 1844089 command_runner.go:130] > [crio.image]
	I1124 09:47:03.759180 1844089 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1124 09:47:03.759194 1844089 command_runner.go:130] > # default_transport = "docker://"
	I1124 09:47:03.759204 1844089 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1124 09:47:03.759211 1844089 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759215 1844089 command_runner.go:130] > # global_auth_file = ""
	I1124 09:47:03.759237 1844089 command_runner.go:130] > # The image used to instantiate infra containers.
	I1124 09:47:03.759259 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759457 1844089 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.759477 1844089 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1124 09:47:03.759497 1844089 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759511 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759702 1844089 command_runner.go:130] > # pause_image_auth_file = ""
	I1124 09:47:03.759716 1844089 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1124 09:47:03.759723 1844089 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1124 09:47:03.759742 1844089 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1124 09:47:03.759757 1844089 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1124 09:47:03.760047 1844089 command_runner.go:130] > # pause_command = "/pause"
	I1124 09:47:03.760064 1844089 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1124 09:47:03.760071 1844089 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1124 09:47:03.760077 1844089 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1124 09:47:03.760108 1844089 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1124 09:47:03.760115 1844089 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1124 09:47:03.760126 1844089 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1124 09:47:03.760131 1844089 command_runner.go:130] > # pinned_images = [
	I1124 09:47:03.760134 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760140 1844089 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1124 09:47:03.760146 1844089 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1124 09:47:03.760157 1844089 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1124 09:47:03.760175 1844089 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1124 09:47:03.760186 1844089 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1124 09:47:03.760191 1844089 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1124 09:47:03.760197 1844089 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1124 09:47:03.760209 1844089 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1124 09:47:03.760216 1844089 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1124 09:47:03.760225 1844089 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1124 09:47:03.760231 1844089 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1124 09:47:03.760246 1844089 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1124 09:47:03.760260 1844089 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1124 09:47:03.760282 1844089 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1124 09:47:03.760292 1844089 command_runner.go:130] > # changing them here.
	I1124 09:47:03.760298 1844089 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1124 09:47:03.760302 1844089 command_runner.go:130] > # insecure_registries = [
	I1124 09:47:03.760312 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760318 1844089 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1124 09:47:03.760329 1844089 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1124 09:47:03.760704 1844089 command_runner.go:130] > # image_volumes = "mkdir"
	I1124 09:47:03.760720 1844089 command_runner.go:130] > # Temporary directory to use for storing big files
	I1124 09:47:03.760964 1844089 command_runner.go:130] > # big_files_temporary_dir = ""
	I1124 09:47:03.760980 1844089 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1124 09:47:03.760987 1844089 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1124 09:47:03.760992 1844089 command_runner.go:130] > # auto_reload_registries = false
	I1124 09:47:03.761030 1844089 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1124 09:47:03.761047 1844089 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1124 09:47:03.761054 1844089 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1124 09:47:03.761232 1844089 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1124 09:47:03.761247 1844089 command_runner.go:130] > # The mode of short name resolution.
	I1124 09:47:03.761255 1844089 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1124 09:47:03.761263 1844089 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1124 09:47:03.761289 1844089 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1124 09:47:03.761475 1844089 command_runner.go:130] > # short_name_mode = "enforcing"
	I1124 09:47:03.761491 1844089 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1124 09:47:03.761498 1844089 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1124 09:47:03.761714 1844089 command_runner.go:130] > # oci_artifact_mount_support = true
	I1124 09:47:03.761730 1844089 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1124 09:47:03.761735 1844089 command_runner.go:130] > # CNI plugins.
	I1124 09:47:03.761738 1844089 command_runner.go:130] > [crio.network]
	I1124 09:47:03.761777 1844089 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1124 09:47:03.761790 1844089 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1124 09:47:03.761797 1844089 command_runner.go:130] > # cni_default_network = ""
	I1124 09:47:03.761810 1844089 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1124 09:47:03.761814 1844089 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1124 09:47:03.761820 1844089 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1124 09:47:03.761839 1844089 command_runner.go:130] > # plugin_dirs = [
	I1124 09:47:03.762075 1844089 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1124 09:47:03.762088 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762092 1844089 command_runner.go:130] > # List of included pod metrics.
	I1124 09:47:03.762097 1844089 command_runner.go:130] > # included_pod_metrics = [
	I1124 09:47:03.762100 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762106 1844089 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1124 09:47:03.762124 1844089 command_runner.go:130] > [crio.metrics]
	I1124 09:47:03.762136 1844089 command_runner.go:130] > # Globally enable or disable metrics support.
	I1124 09:47:03.762321 1844089 command_runner.go:130] > # enable_metrics = false
	I1124 09:47:03.762336 1844089 command_runner.go:130] > # Specify enabled metrics collectors.
	I1124 09:47:03.762342 1844089 command_runner.go:130] > # By default, all metrics are enabled.
	I1124 09:47:03.762349 1844089 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1124 09:47:03.762356 1844089 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1124 09:47:03.762386 1844089 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1124 09:47:03.762392 1844089 command_runner.go:130] > # metrics_collectors = [
	I1124 09:47:03.763119 1844089 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1124 09:47:03.763143 1844089 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1124 09:47:03.763149 1844089 command_runner.go:130] > # 	"containers_oom_total",
	I1124 09:47:03.763153 1844089 command_runner.go:130] > # 	"processes_defunct",
	I1124 09:47:03.763188 1844089 command_runner.go:130] > # 	"operations_total",
	I1124 09:47:03.763201 1844089 command_runner.go:130] > # 	"operations_latency_seconds",
	I1124 09:47:03.763207 1844089 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1124 09:47:03.763212 1844089 command_runner.go:130] > # 	"operations_errors_total",
	I1124 09:47:03.763216 1844089 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1124 09:47:03.763221 1844089 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1124 09:47:03.763226 1844089 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1124 09:47:03.763237 1844089 command_runner.go:130] > # 	"image_pulls_success_total",
	I1124 09:47:03.763260 1844089 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1124 09:47:03.763265 1844089 command_runner.go:130] > # 	"containers_oom_count_total",
	I1124 09:47:03.763270 1844089 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1124 09:47:03.763282 1844089 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1124 09:47:03.763286 1844089 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1124 09:47:03.763290 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763295 1844089 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1124 09:47:03.763300 1844089 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1124 09:47:03.763305 1844089 command_runner.go:130] > # The port on which the metrics server will listen.
	I1124 09:47:03.763313 1844089 command_runner.go:130] > # metrics_port = 9090
	I1124 09:47:03.763327 1844089 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1124 09:47:03.763337 1844089 command_runner.go:130] > # metrics_socket = ""
	I1124 09:47:03.763343 1844089 command_runner.go:130] > # The certificate for the secure metrics server.
	I1124 09:47:03.763349 1844089 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1124 09:47:03.763360 1844089 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1124 09:47:03.763365 1844089 command_runner.go:130] > # certificate on any modification event.
	I1124 09:47:03.763369 1844089 command_runner.go:130] > # metrics_cert = ""
	I1124 09:47:03.763375 1844089 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1124 09:47:03.763379 1844089 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1124 09:47:03.763384 1844089 command_runner.go:130] > # metrics_key = ""
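	(Editorial note: the metrics server quoted above ships disabled. A drop-in like the following, with an illustrative filename such as /etc/crio/crio.conf.d/20-metrics.conf that does not appear in this log, would enable it with the defaults shown:)

```
# Illustrative drop-in (filename is an example, not from this run):
# enable CRI-O's Prometheus metrics server with the defaults quoted above.
[crio.metrics]
enable_metrics = true
metrics_host = "127.0.0.1"
metrics_port = 9090
```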
	I1124 09:47:03.763415 1844089 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1124 09:47:03.763426 1844089 command_runner.go:130] > [crio.tracing]
	I1124 09:47:03.763442 1844089 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1124 09:47:03.763451 1844089 command_runner.go:130] > # enable_tracing = false
	I1124 09:47:03.763456 1844089 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1124 09:47:03.763461 1844089 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1124 09:47:03.763468 1844089 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1124 09:47:03.763476 1844089 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1124 09:47:03.763481 1844089 command_runner.go:130] > # CRI-O NRI configuration.
	I1124 09:47:03.763500 1844089 command_runner.go:130] > [crio.nri]
	I1124 09:47:03.763505 1844089 command_runner.go:130] > # Globally enable or disable NRI.
	I1124 09:47:03.763508 1844089 command_runner.go:130] > # enable_nri = true
	I1124 09:47:03.763524 1844089 command_runner.go:130] > # NRI socket to listen on.
	I1124 09:47:03.763535 1844089 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1124 09:47:03.763540 1844089 command_runner.go:130] > # NRI plugin directory to use.
	I1124 09:47:03.763544 1844089 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1124 09:47:03.763552 1844089 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1124 09:47:03.763560 1844089 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1124 09:47:03.763566 1844089 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1124 09:47:03.763634 1844089 command_runner.go:130] > # nri_disable_connections = false
	I1124 09:47:03.763648 1844089 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1124 09:47:03.763654 1844089 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1124 09:47:03.763669 1844089 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1124 09:47:03.763681 1844089 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1124 09:47:03.763685 1844089 command_runner.go:130] > # NRI default validator configuration.
	I1124 09:47:03.763692 1844089 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1124 09:47:03.763699 1844089 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1124 09:47:03.763703 1844089 command_runner.go:130] > # can be restricted/rejected:
	I1124 09:47:03.763707 1844089 command_runner.go:130] > # - OCI hook injection
	I1124 09:47:03.763719 1844089 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1124 09:47:03.763724 1844089 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1124 09:47:03.763730 1844089 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1124 09:47:03.763748 1844089 command_runner.go:130] > # - adjustment of linux namespaces
	I1124 09:47:03.763770 1844089 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1124 09:47:03.763778 1844089 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1124 09:47:03.763789 1844089 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1124 09:47:03.763792 1844089 command_runner.go:130] > #
	I1124 09:47:03.763797 1844089 command_runner.go:130] > # [crio.nri.default_validator]
	I1124 09:47:03.763802 1844089 command_runner.go:130] > # nri_enable_default_validator = false
	I1124 09:47:03.763807 1844089 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1124 09:47:03.763813 1844089 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1124 09:47:03.763843 1844089 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1124 09:47:03.763859 1844089 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1124 09:47:03.763864 1844089 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1124 09:47:03.763875 1844089 command_runner.go:130] > # nri_validator_required_plugins = [
	I1124 09:47:03.763879 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763885 1844089 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1124 09:47:03.763897 1844089 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1124 09:47:03.763900 1844089 command_runner.go:130] > [crio.stats]
	I1124 09:47:03.763906 1844089 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1124 09:47:03.763912 1844089 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1124 09:47:03.763930 1844089 command_runner.go:130] > # stats_collection_period = 0
	I1124 09:47:03.763938 1844089 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1124 09:47:03.763955 1844089 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1124 09:47:03.763966 1844089 command_runner.go:130] > # collection_period = 0
	I1124 09:47:03.765749 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69660512Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1124 09:47:03.765775 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696644858Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1124 09:47:03.765802 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696680353Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1124 09:47:03.765817 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696705773Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1124 09:47:03.765831 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696792248Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:03.765844 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69715048Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1124 09:47:03.765855 1844089 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1124 09:47:03.766230 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:47:03.766250 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:47:03.766285 1844089 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:47:03.766313 1844089 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:47:03.766550 1844089 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
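	(Editorial note: the config printed above is four YAML documents in one file, later copied to /var/tmp/minikube/kubeadm.yaml.new. A quick sanity check on which document kinds landed in such a file can be sketched as follows; the heredoc below is a stand-in for the real file, not taken from this run:)

```shell
# Sketch: list the document kinds in a multi-document kubeadm config.
# The heredoc stands in for the real /var/tmp/minikube/kubeadm.yaml.new.
cfg_file=$(mktemp)
cat > "$cfg_file" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
# Each document declares exactly one top-level "kind:".
kinds=$(grep '^kind:' "$cfg_file" | awk '{print $2}')
echo "$kinds"
```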
	
	I1124 09:47:03.766656 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:47:03.773791 1844089 command_runner.go:130] > kubeadm
	I1124 09:47:03.773812 1844089 command_runner.go:130] > kubectl
	I1124 09:47:03.773818 1844089 command_runner.go:130] > kubelet
	I1124 09:47:03.774893 1844089 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:47:03.774995 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:47:03.782726 1844089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:47:03.796280 1844089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:47:03.809559 1844089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:47:03.822485 1844089 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:47:03.826210 1844089 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1124 09:47:03.826334 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:03.934288 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:04.458773 1844089 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:47:04.458800 1844089 certs.go:195] generating shared ca certs ...
	I1124 09:47:04.458824 1844089 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:04.458988 1844089 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:47:04.459071 1844089 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:47:04.459080 1844089 certs.go:257] generating profile certs ...
	I1124 09:47:04.459195 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:47:04.459263 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:47:04.459319 1844089 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:47:04.459333 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1124 09:47:04.459352 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1124 09:47:04.459364 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1124 09:47:04.459374 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1124 09:47:04.459384 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1124 09:47:04.459403 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1124 09:47:04.459415 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1124 09:47:04.459426 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1124 09:47:04.459482 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:47:04.459525 1844089 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:47:04.459534 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:47:04.459574 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:47:04.459609 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:47:04.459638 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:47:04.459701 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:04.459738 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.459752 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem -> /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.459763 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.460411 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:47:04.483964 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:47:04.505086 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:47:04.526066 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:47:04.552811 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:47:04.572010 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:47:04.590830 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:47:04.609063 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:47:04.627178 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:47:04.645228 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:47:04.662875 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:47:04.680934 1844089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:47:04.694072 1844089 ssh_runner.go:195] Run: openssl version
	I1124 09:47:04.700410 1844089 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1124 09:47:04.700488 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:47:04.708800 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712351 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712441 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712518 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.755374 1844089 command_runner.go:130] > 3ec20f2e
	I1124 09:47:04.755866 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:47:04.763956 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:47:04.772579 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776497 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776523 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776574 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.817126 1844089 command_runner.go:130] > b5213941
	I1124 09:47:04.817555 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:47:04.825631 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:47:04.834323 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838391 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838437 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838503 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.879479 1844089 command_runner.go:130] > 51391683
	I1124 09:47:04.879964 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
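	(Editorial note: the sequence above is OpenSSL's standard trust-store wiring: hash a CA certificate's subject, then symlink it as <hash>.0 so the verifier can find it; the -checkend runs further down check 24h of remaining validity. A minimal sketch of that pattern, using a throwaway self-signed cert in a temp dir rather than the log's real paths under /usr/share/ca-certificates:)

```shell
# Sketch of the hash-and-symlink pattern from the log, on a throwaway cert.
set -e
dir=$(mktemp -d)
# Generate a short-lived self-signed certificate (illustrative subject).
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=demo" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
# Subject hash, e.g. "3ec20f2e" in the log above.
hash=$(openssl x509 -hash -noout -in "$dir/cert.pem")
# Mirrors "ln -fs /etc/ssl/certs/<name>.pem /etc/ssl/certs/<hash>.0".
ln -fs "$dir/cert.pem" "$dir/$hash.0"
# Exits 0 and prints "Certificate will not expire" if valid for another 24h.
openssl x509 -noout -in "$dir/$hash.0" -checkend 86400
```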
	I1124 09:47:04.888201 1844089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892298 1844089 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892323 1844089 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1124 09:47:04.892330 1844089 command_runner.go:130] > Device: 259,1	Inode: 1049847     Links: 1
	I1124 09:47:04.892337 1844089 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:04.892344 1844089 command_runner.go:130] > Access: 2025-11-24 09:42:55.781942492 +0000
	I1124 09:47:04.892349 1844089 command_runner.go:130] > Modify: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892354 1844089 command_runner.go:130] > Change: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892360 1844089 command_runner.go:130] >  Birth: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892420 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:47:04.935687 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.935791 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:47:04.977560 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.978011 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:47:05.021496 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.021984 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:47:05.064844 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.065359 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:47:05.108127 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.108275 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:47:05.149417 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.149874 1844089 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:05.149970 1844089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:47:05.150065 1844089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:05.178967 1844089 cri.go:89] found id: ""
	I1124 09:47:05.179068 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:47:05.186015 1844089 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1124 09:47:05.186039 1844089 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1124 09:47:05.186047 1844089 command_runner.go:130] > /var/lib/minikube/etcd:
	I1124 09:47:05.187003 1844089 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:47:05.187020 1844089 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:47:05.187103 1844089 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:47:05.195380 1844089 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:47:05.195777 1844089 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-373432" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.195884 1844089 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-1804834/kubeconfig needs updating (will repair): [kubeconfig missing "functional-373432" cluster setting kubeconfig missing "functional-373432" context setting]
	I1124 09:47:05.196176 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.196576 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.196729 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.197389 1844089 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 09:47:05.197410 1844089 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 09:47:05.197417 1844089 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 09:47:05.197421 1844089 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 09:47:05.197425 1844089 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 09:47:05.197478 1844089 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1124 09:47:05.197834 1844089 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:47:05.206841 1844089 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1124 09:47:05.206877 1844089 kubeadm.go:602] duration metric: took 19.851198ms to restartPrimaryControlPlane
	I1124 09:47:05.206901 1844089 kubeadm.go:403] duration metric: took 57.044926ms to StartCluster
	I1124 09:47:05.206915 1844089 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.206989 1844089 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.207632 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.208100 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:05.207869 1844089 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:05.208216 1844089 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:05.208554 1844089 addons.go:70] Setting storage-provisioner=true in profile "functional-373432"
	I1124 09:47:05.208570 1844089 addons.go:239] Setting addon storage-provisioner=true in "functional-373432"
	I1124 09:47:05.208595 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.208650 1844089 addons.go:70] Setting default-storageclass=true in profile "functional-373432"
	I1124 09:47:05.208696 1844089 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-373432"
	I1124 09:47:05.208964 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.209057 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.215438 1844089 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:05.218563 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:05.247382 1844089 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:05.249311 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.249495 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.249781 1844089 addons.go:239] Setting addon default-storageclass=true in "functional-373432"
	I1124 09:47:05.249815 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.250242 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.250436 1844089 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.250452 1844089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:05.250491 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.282635 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.300501 1844089 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:05.300528 1844089 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:05.300592 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.336568 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.425988 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:05.454084 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.488439 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.208671 1844089 node_ready.go:35] waiting up to 6m0s for node "functional-373432" to be "Ready" ...
	I1124 09:47:06.208714 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208746 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208771 1844089 retry.go:31] will retry after 239.578894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.208823 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208836 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208841 1844089 retry.go:31] will retry after 363.194189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208887 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.209209 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.448577 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:06.513317 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.513406 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.513430 1844089 retry.go:31] will retry after 455.413395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.572567 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.636310 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.636351 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.636371 1844089 retry.go:31] will retry after 493.81878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.709791 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.969606 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.043721 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.043767 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.043786 1844089 retry.go:31] will retry after 737.997673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.130919 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.189702 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.189740 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.189777 1844089 retry.go:31] will retry after 362.835066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.209918 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.209989 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.210325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.552843 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.609433 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.612888 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.612921 1844089 retry.go:31] will retry after 813.541227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.709061 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.709150 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.709464 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.782677 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.840776 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.844096 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.844127 1844089 retry.go:31] will retry after 1.225797654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.209825 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.209923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.210302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:08.210357 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:08.426707 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:08.489610 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:08.489648 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.489666 1844089 retry.go:31] will retry after 1.230621023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.709036 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.709146 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.709492 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.070184 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:09.132816 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.132856 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.132877 1844089 retry.go:31] will retry after 1.628151176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.209565 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.709579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.709673 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.721235 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:09.779532 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.779572 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.779591 1844089 retry.go:31] will retry after 1.535326746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.208957 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:10.709858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.709945 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.710278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:10.710329 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:10.761451 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:10.821517 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:10.825161 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.825191 1844089 retry.go:31] will retry after 2.22755575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.209753 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.209827 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.210169 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:11.315630 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:11.371370 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:11.375223 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.375258 1844089 retry.go:31] will retry after 3.052255935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.709710 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.709783 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.710113 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.208839 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.208935 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.209276 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.709439 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:13.052884 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:13.107513 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:13.110665 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.110696 1844089 retry.go:31] will retry after 2.047132712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.208986 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:13.209499 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:13.708863 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.708946 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.709225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.428018 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:14.497830 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:14.500554 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.500586 1844089 retry.go:31] will retry after 5.866686171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.708931 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.158123 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.208926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.209197 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.236504 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:15.240097 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.240134 1844089 retry.go:31] will retry after 4.86514919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.710246 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:15.710298 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:16.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.209082 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:16.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.709395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.209050 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.708987 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:18.208849 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.208918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.209189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:18.209229 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:18.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.708962 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.709278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.709232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:20.105978 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:20.163220 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.166411 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.166455 1844089 retry.go:31] will retry after 7.973407294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.209623 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.209700 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.210040 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:20.210093 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:20.367494 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:20.426176 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.426221 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.426244 1844089 retry.go:31] will retry after 7.002953248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.709786 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.710109 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.208846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.208922 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.709365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.209249 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.209597 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.709231 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.709348 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.709682 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:22.709735 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:23.209559 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.209633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.209953 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:23.709725 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.710141 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.209255 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.708973 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.709052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:25.209389 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.209467 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.209841 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:25.209903 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:25.709642 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.709719 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.209709 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.210119 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.709913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.709992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.710307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.208828 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.208902 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.209226 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.429779 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:27.489021 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:27.489061 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.489078 1844089 retry.go:31] will retry after 11.455669174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.709620 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.709697 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.710061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:27.710112 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:28.140690 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:28.207909 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:28.207963 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.207981 1844089 retry.go:31] will retry after 7.295318191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.209358 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:28.709045 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.709130 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.709479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.209267 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.209347 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.209673 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.709959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.710312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:29.710375 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:30.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.209713 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.210010 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:30.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.208899 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.709282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:32.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.209035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.209376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:32.209432 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:32.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.709024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.208858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.208927 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.209204 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.709003 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:34.208983 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.209403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:34.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:34.709379 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.709553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.709927 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.209738 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.209811 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.210108 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.503497 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:35.564590 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:35.564633 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.564653 1844089 retry.go:31] will retry after 18.757863028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.709881 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.709958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.710297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.208842 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.208909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.209196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.708965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.709288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:36.709337 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:37.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.209034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:37.708926 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.708999 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:38.709418 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:38.945958 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:39.002116 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:39.006563 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.006598 1844089 retry.go:31] will retry after 17.731618054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.209830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.210101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:39.708971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.709049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.209137 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.709279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.709607 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:40.709669 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:41.209237 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:41.709465 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.709538 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.709862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.209660 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.209740 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.210065 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.709826 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.710247 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:42.710300 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:43.208851 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.208929 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.209238 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:43.708832 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.708904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.709198 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.209292 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.709200 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.709637 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:45.209579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.209674 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.210095 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:45.210174 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:45.708846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.708926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.709257 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.709348 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.208969 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:47.709460 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:48.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:48.708913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.708985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.709311 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.209041 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.709341 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.709413 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:49.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:50.209504 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.209579 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.209916 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:50.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.709795 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.209819 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.210144 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.708840 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.708913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.709251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:52.208995 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.209079 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.209450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:52.209504 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:52.709193 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.709263 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.709579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.209019 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.709514 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.208914 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.208983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.323627 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:54.379391 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:54.382809 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.382842 1844089 retry.go:31] will retry after 21.097681162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
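	The `retry.go:31] will retry after 21.097681162s` messages above come from minikube's addon-apply loop: each failed `kubectl apply` is retried after a growing, jittered delay instead of immediately. This is a minimal sketch of that retry-with-backoff pattern, not minikube's actual `retry.go` implementation; the `retry` helper and the simulated failing operation are illustrative assumptions.

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retry runs op up to maxAttempts times, sleeping backoff between
	// failures and doubling it each round, mirroring the
	// "will retry after Ns" messages in the log above.
	func retry(maxAttempts int, backoff time.Duration, op func() error) error {
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = op(); err == nil {
				return nil
			}
			if attempt < maxAttempts {
				fmt.Printf("attempt %d failed: %v; will retry after %s\n", attempt, err, backoff)
				time.Sleep(backoff)
				backoff *= 2
			}
		}
		return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		calls := 0
		// Simulated apply that refuses connections twice, then succeeds,
		// like the apiserver coming back up mid-test.
		op := func() error {
			calls++
			if calls < 3 {
				return errors.New("connect: connection refused")
			}
			return nil
		}
		if err := retry(4, 10*time.Millisecond, op); err != nil {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Printf("succeeded on attempt %d\n", calls)
	}
	```

	Real implementations (including minikube's) typically add random jitter to the delay, which is why the logged intervals above are uneven values like 21.097s and 32.033s rather than exact powers of two.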
	I1124 09:47:54.709482 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.709561 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.709905 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:54.709960 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:55.209834 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.209915 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.210225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:55.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.708984 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.709297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.209078 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.709184 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.709266 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.709603 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.738841 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:56.794457 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:56.797830 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:56.797870 1844089 retry.go:31] will retry after 32.033139138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:57.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.209553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.209864 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:57.209918 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:57.709718 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.709790 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.710100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.209898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.209970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.210337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.709037 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.709135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.209241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.209573 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.709578 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.709657 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.710027 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:59.710084 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:00.211215 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.211305 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.211621 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:00.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.208998 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.209081 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.708891 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.708967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:02.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.209136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.209526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:02.209599 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:02.709293 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.709375 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.709754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.209529 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.209595 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.209866 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.709708 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.709780 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.710093 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:04.209893 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.209965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.210332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:04.210385 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:04.709021 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.709445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.209464 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.209551 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.209872 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.709670 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.709745 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.710155 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.209763 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.209847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.708847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.708923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.709285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:06.709340 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:07.208931 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:07.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.709326 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.709122 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.709201 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.709539 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:08.709592 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:09.209218 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.209284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.209536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:09.709509 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.709587 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.709963 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.209602 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.209679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.209999 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.709702 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.709772 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.710032 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:10.710072 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:11.209870 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.209951 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.210285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:11.708984 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.208994 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:13.209062 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.209163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:13.209567 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:13.709210 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.709665 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.209027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.708929 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:15.209506 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.209583 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.209851 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:15.209900 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:15.481440 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:15.543475 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:15.543517 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.543536 1844089 retry.go:31] will retry after 17.984212056s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.709841 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.709917 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.710203 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.208972 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.209053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.209359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.708920 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.709254 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.209025 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.209445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.709181 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.709254 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.709571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:17.709636 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:18.209204 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.209276 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:18.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.209167 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.209240 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.709543 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.709616 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.709867 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:19.709908 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:20.209743 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.209813 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.210142 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:20.708844 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.708918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.709248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.709064 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:22.209022 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.209096 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.209401 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:22.209447 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:22.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.209381 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.709077 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.709165 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.709527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:24.209256 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.209332 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:24.209710 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:24.709523 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.709594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.709919 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.209714 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.209794 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.210176 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.709866 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.709934 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.710232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.709174 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.709252 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.709562 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:26.709621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:27.209207 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.209330 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.209681 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:27.709493 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.709901 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.209534 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.209607 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.209945 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.709691 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:28.710042 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:28.831261 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:48:28.892751 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892791 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892882 1844089 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:29.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:29.709415 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.709488 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.709832 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.209666 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.209735 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.209996 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.709837 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.709912 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.710250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:30.710310 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:31.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.209451 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:31.708995 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.709068 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.209127 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.209200 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.209540 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.709251 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.709359 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.709688 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:33.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:33.209573 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:33.528038 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:33.587216 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587268 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587355 1844089 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:33.590586 1844089 out.go:179] * Enabled addons: 
	I1124 09:48:33.594109 1844089 addons.go:530] duration metric: took 1m28.385890989s for enable addons: enabled=[]
	I1124 09:48:33.709504 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.709580 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.709909 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.209684 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.209763 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.210103 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:35.209792 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.209867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.210196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:35.210254 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:35.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.209290 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.708942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.209089 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.209182 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:37.709398 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:38.208956 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.209049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.209393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:38.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.209063 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.209144 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.209398 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.709762 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:39.709826 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:40.209362 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.209445 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.209801 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:40.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.709695 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.710016 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.209808 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.209911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.210242 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.708947 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.709450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:42.209333 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.209441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.209737 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:42.209782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:42.709513 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.709593 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.709913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.209705 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.209787 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.210136 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.709811 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.709882 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.710135 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.208840 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.208916 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.709434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:44.709491 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:45.209557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.210004 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:45.709853 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.709947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.710263 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.708971 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:47.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:47.209423 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:47.708928 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.709090 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.709181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.709512 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:49.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:49.209487 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:49.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.209043 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.208831 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.209321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.708959 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:51.709417 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:52.209136 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.209591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:52.709205 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.709536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.209062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.709175 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.709255 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.709599 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:53.709661 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:54.209206 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.209288 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:54.709557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.709679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.709998 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.209740 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.210158 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.708864 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:56.208988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.209080 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:56.209502 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:56.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.709658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.209431 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.209503 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.209825 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.709393 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.709781 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:58.209591 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.209670 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.210036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:58.210095 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:58.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.709861 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.208919 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.709435 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.709520 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.709836 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:00.209722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.210110 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:00.210156 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:00.709882 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.709966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.710301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.208997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.709044 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.709069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:02.709406 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:03.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.209309 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:03.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.709027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.709334 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.208982 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.209059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.709678 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:04.709782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:05.209548 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.209645 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.209977 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:05.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.710166 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.208981 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.209051 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.209332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.709332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:07.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.209086 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.209494 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:07.209563 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:07.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.709391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.209399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.709011 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.709085 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.209052 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.209488 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.709362 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.709442 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.709796 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:09.709855 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:10.209613 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.209690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:10.709735 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.709803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.209881 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.209958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.210304 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.709359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:12.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:12.209396 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:12.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.709325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.209056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.209385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.709008 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.709380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:14.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.209238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.209577 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:14.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:14.709397 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.709478 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.709814 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.209760 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.210102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.709949 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.710282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.209016 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.709074 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.709163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.709419 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:16.709459 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:17.209141 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.209215 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:17.709286 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.709666 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.209424 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.209499 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.209754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.709505 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.709585 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.709897 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:18.709953 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:19.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.209779 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.210117 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:19.709834 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:21.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:21.209415 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:21.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.709029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.209126 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.209204 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.209575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.709550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:23.209231 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.209670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:23.209763 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:23.709555 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.709633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.709995 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.209767 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.209841 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.709526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:25.209328 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.209411 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.209756 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:25.209816 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:25.709508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.709600 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.709938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.209774 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.209856 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.210202 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.709369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:27.209746 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.210131 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:27.210184 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:27.708830 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.708905 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.208880 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.209307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.709007 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.709327 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.209020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.709345 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.709441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.709777 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:29.709838 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:30.209612 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.209687 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.209958 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:30.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.709798 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.710129 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.208884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.209299 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.708974 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:32.208916 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.208993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:32.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:32.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.208919 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.208994 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.209330 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.709056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.709413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:34.209151 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.209227 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:34.209646 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:34.709436 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.709506 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.709774 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.209725 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.209803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.210160 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.708884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.708977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.208912 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.209323 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.709458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:36.709524 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:37.209047 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.209151 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:37.709220 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.709324 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.709631 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.209508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.209592 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.209964 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.709785 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.709869 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.710199 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:38.710257 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:39.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.208884 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.209168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:39.709057 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.709156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.709501 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.209097 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.209195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.709222 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.709295 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.709630 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:41.209317 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.209397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.209747 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:41.209802 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:41.709569 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.709654 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.709993 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.209817 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.209904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.210200 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.209070 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.709575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:43.709620 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:44.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:44.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.709401 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.709783 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.209860 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.209959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.210271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.708945 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:46.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.209515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:46.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:46.709202 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.709268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.209384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.709402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.209161 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.209414 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.709091 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.709194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.709569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:48.709627 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:49.209307 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.209384 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.209719 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:49.709527 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.709599 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.709865 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.209620 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.209699 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.210039 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.709717 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.709799 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:50.710183 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:51.208825 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.208894 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.209172 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:51.708925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.709349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.708893 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:53.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.209349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:53.209399 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:53.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.208920 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.209318 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.709373 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.709458 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.709760 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:55.209592 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.209978 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:55.210040 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:55.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.710161 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.208943 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.209271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.708876 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.708959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.208866 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.209285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.708997 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.709427 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:57.709482 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:58.209166 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.209246 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.209658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:58.709454 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.709524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.709780 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.209521 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.209598 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.209934 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.709770 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.709854 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.710168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:59.710230 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:00.208926 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.210913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:50:00.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.709842 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.710201 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.209315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.709093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:02.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.209057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:02.209542 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:02.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.709389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.209380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.708939 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.709357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.208970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.209268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.709182 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.709269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.709623 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:04.709678 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:05.209442 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.209524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.209862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:05.709612 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.709690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.710022 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.209806 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.209880 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.210219 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:07.209084 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.209187 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.209448 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:07.209497 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:07.709139 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.709341 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.209829 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.209903 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.708897 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.708964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.709236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.208927 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.209002 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.209378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.708935 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:09.709424 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:10.208903 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.209331 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:10.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.709423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.209530 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.709132 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.709202 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:11.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:12.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:12.709068 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.709177 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.709636 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.209220 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.209299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.209571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:14.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.209025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:14.209433 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:14.708909 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.708988 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.709306 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.209826 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.210152 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.708902 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.208905 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.208978 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.209278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.708874 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.709267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:16.709311 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:17.208877 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.209356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:17.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.209413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.709157 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.709238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:18.709645 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:19.209201 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.209269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.209518 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:19.709485 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.709558 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.709880 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.209636 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.209974 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.709755 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.709829 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.710090 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:20.710130 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:21.209835 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.209913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:21.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.709338 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.208900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.208981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.709058 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:23.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.209616 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:23.209677 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:23.709211 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.708953 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.209203 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.209580 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.709392 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.709705 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:25.709765 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:26.209510 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.209594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.209928 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:26.709733 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.209837 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.209926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.210235 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.709350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:28.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.209251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:28.209296 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:28.709016 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.709092 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.709432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.709708 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:30.209514 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.209603 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.209930 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:30.209989 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:30.709705 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.709782 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.710096 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.209823 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.210153 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.709337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.209065 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.209484 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:32.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:33.209221 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.209638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:33.709229 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.709309 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.709638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.209279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.209527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.709451 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.709526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.709824 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:34.709870 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:35.209712 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.210156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:35.709774 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.710101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.208924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.209266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.708961 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.709411 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:37.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.208992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.209261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:37.209303 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:37.708946 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.209345 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.709003 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.709091 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.709404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:39.209187 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.209613 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:39.209672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:39.709433 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.709508 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.709838 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.209598 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.209675 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.709773 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.709855 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.710189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.208908 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:41.709318 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:42.209001 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.209093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:42.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.709286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.709587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.209235 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.209303 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.709313 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.709652 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:43.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:44.209469 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.209542 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.209879 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:44.709684 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.709755 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.710023 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.208845 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.208942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.709723 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.709804 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.710156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:45.710211 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:46.208872 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.208948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.209249 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:46.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.208964 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.709147 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.709424 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:48.209095 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.209192 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:48.209580 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:48.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.709378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.209077 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.709414 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.709491 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.709816 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:50.209655 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.209742 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.210066 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:50.210123 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:50.709861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.709937 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.710188 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.208878 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.208952 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.209322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.708914 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.708993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.208985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.709023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.709362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:52.709420 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:53.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.209038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.209404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:53.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.708997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.709294 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.709054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.709449 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:54.709516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:55.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.209634 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.209938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:55.709754 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.709830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.710148 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.208939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:57.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.209386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:57.209445 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:57.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.709399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.209082 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.209168 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.209479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.709393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.208975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.209052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.708894 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.709244 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:59.709289 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:00.209000 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.209097 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:00.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.209068 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.209486 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:01.709394 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:02.208943 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.209065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:02.708872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.708947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.709229 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:03.208953 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.210127 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:51:03.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.708939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.709302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:04.209006 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.209406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:04.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:04.709392 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.709474 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.709835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.209403 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.209479 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.209835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.709680 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.709766 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.710028 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:06.209869 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.209955 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.210295 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:06.210355 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:06.708964 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.709046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.709408 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.708938 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.209150 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.209225 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.709219 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.709289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.709627 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:08.709719 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:09.209522 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.209981 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:09.709768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.709843 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.208987 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:11.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.209070 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:11.209465 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:11.708886 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.708972 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.709239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.208935 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.708976 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.208896 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:13.709391 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:14.208984 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:14.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.709615 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.209618 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.209698 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.210033 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.709832 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.709911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.710236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:15.710293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:16.208918 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:16.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.709009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.709328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.209046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.209357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.708924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.709185 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:18.208887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.208963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.209319 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:18.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:18.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.709038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.208917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.709241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.709591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:20.209330 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.209415 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:20.209872 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:20.709638 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.709728 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.209872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.209964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.210347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.709162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.709523 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.209023 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.209095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.209382 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.709055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:22.709477 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:23.209195 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.209283 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:23.709228 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.709557 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.709350 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.709431 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.709744 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:24.709799 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:25.209704 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.210041 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:25.709815 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.709891 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.209912 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.209990 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.210312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.708887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.708968 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.709274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:27.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:27.209444 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:27.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.209045 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.209134 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:29.209164 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:29.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:29.709432 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.709510 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.709795 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.209546 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.209973 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.709624 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.709702 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.710036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:31.209789 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.209866 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.210145 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:31.210192 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:31.708857 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.709271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.208962 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.209409 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.708881 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.708953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.709262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.709012 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:33.709407 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:34.209051 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.209156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:34.709486 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.709969 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.209768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.209850 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.210220 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.709035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:36.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.209372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:36.209421 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:36.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.709519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.208947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.209239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.709416 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:38.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.209257 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:38.209647 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:38.709163 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.709230 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.208929 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.209009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.209347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.709324 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.709397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.709728 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:40.209489 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.209557 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.209830 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:40.209876 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:40.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.709707 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.710055 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.209685 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.209762 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.210061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.709752 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.709828 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.710112 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.709372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:42.709426 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:43.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:43.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.709026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.209194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.209587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.709472 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.709546 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.709820 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:44.709861 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:45.209849 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.209939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.210268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:45.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.709006 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.709307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.209264 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.709403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:47.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.209219 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.209569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:47.209622 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:47.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.709312 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.709563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.208991 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.709134 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.709207 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.709500 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.209300 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:49.709409 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:50.209121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.209206 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:50.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.709261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.209021 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.209129 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.209441 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.709189 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.709265 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.709596 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:51.709649 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:52.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.209290 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.209551 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:52.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.209080 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.209550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.708975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.709317 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.209337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:54.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:54.708980 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.209380 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.209452 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.209779 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.709062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.709456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:56.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.209063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:56.209437 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:56.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.709867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.209908 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.209985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.210307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:58.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.209185 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:58.209485 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:58.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.209363 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.708916 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.709322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.209018 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.709370 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.709700 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:00.709758 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:01.209455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.209526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.209787 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:01.709649 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.709729 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.209841 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.209925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.210265 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.709293 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:03.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.209407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:03.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:03.709146 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.709228 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.709570 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.209286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.209544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.709581 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.709667 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.710009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.208954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.708991 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.709066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:05.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:06.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:06.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.709218 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.709559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.209217 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.209291 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.209612 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.709400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:08.209119 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.209197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:08.209621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:08.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.709292 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:10.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:11.209191 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.209268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.209610 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:11.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.709609 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.208967 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.709039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:13.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.209308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:13.209350 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:13.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.709140 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.709483 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.709549 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.709685 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.710128 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:15.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.209289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.209620 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:15.209679 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:15.709455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.709531 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.709878 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.209651 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.209725 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.209983 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.710195 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:17.709412 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:18.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.208939 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.209302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.709272 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.709356 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:19.709724 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:20.209502 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.209578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.209951 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:20.709782 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.710102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.209876 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.209953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.210310 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.708981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.709321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:22.208895 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.208966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.209252 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:22.209293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:22.709034 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.709136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.709493 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.209350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.708983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.709272 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:24.208934 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.209013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:24.209428 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:24.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.709396 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.209216 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.209546 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.709028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:26.209056 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.209172 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:26.209508 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:26.708880 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.708948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.709291 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.709023 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.709120 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.209060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.209160 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:28.709443 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:29.209162 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.209244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:29.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.709568 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.709818 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.209668 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.209750 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.210098 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.708867 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.708942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:31.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.208986 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.209328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:31.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:31.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.709072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.709157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:33.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:33.209513 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:33.709035 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.709137 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.208964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.209274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.709168 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.709244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:35.209409 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.209492 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.209807 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:35.209852 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:35.709526 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.709597 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.709869 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.209708 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.210043 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.710262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.209297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:37.709440 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:38.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.209069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:38.708972 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.709373 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.709697 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:39.709756 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:40.209475 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.209550 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.209908 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:40.709677 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.709752 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.710115 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.209759 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.210192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.708889 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.709284 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:42.208977 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:42.209516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:42.709031 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.709125 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.709477 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.209073 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.209164 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.209444 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.709305 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.709379 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.709632 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:44.709672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:45.209865 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.210034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.211000 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:45.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.709376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.209157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.209473 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.709360 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:47.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.209066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.209434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:47.209489 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:47.709149 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.709220 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.709470 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.209367 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.208882 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.208956 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.209248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.708923 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:49.709401 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:50.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.209015 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.209369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:50.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.709142 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.709429 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.209160 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.209581 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.709273 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.709351 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.709670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:51.709725 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:52.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.209549 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.209889 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:52.709740 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.709823 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.710180 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.209005 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.709060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:54.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:54.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:54.708969 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.709044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.709387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.209314 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.209382 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.209635 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.709021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:56.209074 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:56.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:56.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.709266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.709098 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.709195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.709513 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.208958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.209260 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.708954 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.709420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:58.709478 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:59.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:59.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.709259 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.709688 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.710034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:00.710088 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:01.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.209777 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.210034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:01.709784 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.709858 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.710223 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.209882 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.209960 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.210301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.708970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:03.209442 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:03.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.209135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.209531 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.709573 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.709992 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:05.209202 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.209287 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.209886 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:05.209937 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:05.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.709000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:06.209058 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:06.209140 1844089 node_ready.go:38] duration metric: took 6m0.000414768s for node "functional-373432" to be "Ready" ...
	I1124 09:53:06.212349 1844089 out.go:203] 
	W1124 09:53:06.215554 1844089 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1124 09:53:06.215587 1844089 out.go:285] * 
	* 
	W1124 09:53:06.217723 1844089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:53:06.220637 1844089 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-373432 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m7.498314453s for "functional-373432" cluster.
I1124 09:53:06.829882 1806704 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (299.097128ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 logs -n 25: (1.029298914s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-498341 image save kicbase/echo-server:functional-498341 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image rm kicbase/echo-server:functional-498341 --alsologtostderr                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image save --daemon kicbase/echo-server:functional-498341 --alsologtostderr                                                             │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/1806704.pem                                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /usr/share/ca-certificates/1806704.pem                                                                                     │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/18067042.pem                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /usr/share/ca-certificates/18067042.pem                                                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/test/nested/copy/1806704/hosts                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format short --alsologtostderr                                                                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format yaml --alsologtostderr                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh pgrep buildkitd                                                                                                                     │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │                     │
	│ image          │ functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format json --alsologtostderr                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format table --alsologtostderr                                                                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ delete         │ -p functional-498341                                                                                                                                      │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │ 24 Nov 25 09:38 UTC │
	│ start          │ -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │                     │
	│ start          │ -p functional-373432 --alsologtostderr -v=8                                                                                                               │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:46:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:46:59.387016 1844089 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:46:59.387211 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387243 1844089 out.go:374] Setting ErrFile to fd 2...
	I1124 09:46:59.387263 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387557 1844089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:46:59.388008 1844089 out.go:368] Setting JSON to false
	I1124 09:46:59.388882 1844089 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30570,"bootTime":1763947050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:46:59.388979 1844089 start.go:143] virtualization:  
	I1124 09:46:59.392592 1844089 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:46:59.396303 1844089 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:46:59.396370 1844089 notify.go:221] Checking for updates...
	I1124 09:46:59.402093 1844089 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:46:59.405033 1844089 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:46:59.407908 1844089 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:46:59.411405 1844089 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:46:59.414441 1844089 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:46:59.417923 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:46:59.418109 1844089 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:46:59.451337 1844089 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:46:59.451452 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.507906 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.498692309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.508018 1844089 docker.go:319] overlay module found
	I1124 09:46:59.511186 1844089 out.go:179] * Using the docker driver based on existing profile
	I1124 09:46:59.514098 1844089 start.go:309] selected driver: docker
	I1124 09:46:59.514123 1844089 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.514235 1844089 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:46:59.514350 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.569823 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.559648119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.570237 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:46:59.570306 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:46:59.570363 1844089 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.573590 1844089 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:46:59.576497 1844089 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:46:59.579448 1844089 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:46:59.582547 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:46:59.582648 1844089 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:46:59.602755 1844089 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:46:59.602781 1844089 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:46:59.648405 1844089 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:46:59.826473 1844089 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:46:59.826636 1844089 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:46:59.826856 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:46:59.826893 1844089 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:46:59.826927 1844089 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:46:59.826975 1844089 start.go:364] duration metric: took 25.756µs to acquireMachinesLock for "functional-373432"
	I1124 09:46:59.826990 1844089 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:46:59.826996 1844089 fix.go:54] fixHost starting: 
	I1124 09:46:59.827258 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:46:59.843979 1844089 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:46:59.844011 1844089 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:46:59.847254 1844089 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:46:59.847299 1844089 machine.go:94] provisionDockerMachine start ...
	I1124 09:46:59.847379 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:46:59.872683 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:46:59.873034 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:46:59.873051 1844089 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:46:59.992797 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.044426 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.044454 1844089 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:47:00.044547 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.104810 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.105156 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.105170 1844089 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:47:00.386378 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.386611 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.409023 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.411110 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.411442 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.411467 1844089 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:00.595280 1844089 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595319 1844089 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595392 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:47:00.595381 1844089 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595403 1844089 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 139.325µs
	I1124 09:47:00.595412 1844089 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595423 1844089 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595434 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:47:00.595442 1844089 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 62.902µs
	I1124 09:47:00.595450 1844089 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595457 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:47:00.595463 1844089 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 41.207µs
	I1124 09:47:00.595469 1844089 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:47:00.595461 1844089 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595477 1844089 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595494 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:47:00.595500 1844089 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.394µs
	I1124 09:47:00.595507 1844089 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595510 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:47:00.595517 1844089 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595524 1844089 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.5µs
	I1124 09:47:00.595532 1844089 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595282 1844089 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595546 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:47:00.595552 1844089 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 36.923µs
	I1124 09:47:00.595556 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:47:00.595558 1844089 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:47:00.595562 1844089 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 302.437µs
	I1124 09:47:00.595572 1844089 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:47:00.595568 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:47:00.595581 1844089 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 263.856µs
	I1124 09:47:00.595587 1844089 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:47:00.595593 1844089 cache.go:87] Successfully saved all images to host disk.
	I1124 09:47:00.596331 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:00.596354 1844089 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:47:00.596379 1844089 ubuntu.go:190] setting up certificates
	I1124 09:47:00.596403 1844089 provision.go:84] configureAuth start
	I1124 09:47:00.596480 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:00.614763 1844089 provision.go:143] copyHostCerts
	I1124 09:47:00.614805 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614845 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:47:00.614865 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614942 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:47:00.615049 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615076 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:47:00.615081 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615111 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:47:00.615166 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615187 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:47:00.615191 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615218 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:47:00.615273 1844089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:47:00.746073 1844089 provision.go:177] copyRemoteCerts
	I1124 09:47:00.746146 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:00.746187 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.767050 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:00.873044 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1124 09:47:00.873153 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:00.891124 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1124 09:47:00.891207 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:47:00.909032 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1124 09:47:00.909209 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:47:00.927426 1844089 provision.go:87] duration metric: took 330.992349ms to configureAuth
	I1124 09:47:00.927482 1844089 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:47:00.927686 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:00.927808 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.945584 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.945906 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.945929 1844089 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:01.279482 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:01.279511 1844089 machine.go:97] duration metric: took 1.432203745s to provisionDockerMachine
	I1124 09:47:01.279522 1844089 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:47:01.279534 1844089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:01.279608 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:01.279659 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.306223 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.413310 1844089 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:01.416834 1844089 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1124 09:47:01.416855 1844089 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1124 09:47:01.416859 1844089 command_runner.go:130] > VERSION_ID="12"
	I1124 09:47:01.416863 1844089 command_runner.go:130] > VERSION="12 (bookworm)"
	I1124 09:47:01.416868 1844089 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1124 09:47:01.416884 1844089 command_runner.go:130] > ID=debian
	I1124 09:47:01.416889 1844089 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1124 09:47:01.416894 1844089 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1124 09:47:01.416900 1844089 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1124 09:47:01.416956 1844089 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:47:01.416971 1844089 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:47:01.416982 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:47:01.417038 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:47:01.417141 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:47:01.417149 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /etc/ssl/certs/18067042.pem
	I1124 09:47:01.417225 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:47:01.417238 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> /etc/test/nested/copy/1806704/hosts
	I1124 09:47:01.417285 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:47:01.425057 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:01.443829 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:47:01.461688 1844089 start.go:296] duration metric: took 182.151565ms for postStartSetup
	I1124 09:47:01.461806 1844089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:47:01.461866 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.478949 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.582285 1844089 command_runner.go:130] > 19%
	I1124 09:47:01.582359 1844089 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:47:01.587262 1844089 command_runner.go:130] > 159G
	I1124 09:47:01.587296 1844089 fix.go:56] duration metric: took 1.760298367s for fixHost
	I1124 09:47:01.587308 1844089 start.go:83] releasing machines lock for "functional-373432", held for 1.76032423s
	I1124 09:47:01.587385 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:01.605227 1844089 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:01.605290 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.605558 1844089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:01.605651 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.623897 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.640948 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.724713 1844089 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1763789673-21948", "minikube_version": "v1.37.0", "commit": "2996c7ec74d570fa8ab37e6f4f8813150d0c7473"}
	I1124 09:47:01.724863 1844089 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:01.812522 1844089 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1124 09:47:01.816014 1844089 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1124 09:47:01.816053 1844089 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1124 09:47:01.816128 1844089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:01.851397 1844089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1124 09:47:01.855673 1844089 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1124 09:47:01.855841 1844089 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:01.855908 1844089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:01.863705 1844089 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:47:01.863730 1844089 start.go:496] detecting cgroup driver to use...
	I1124 09:47:01.863762 1844089 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:47:01.863809 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:01.879426 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:01.892902 1844089 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:01.892974 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:01.908995 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:01.922294 1844089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:02.052541 1844089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:02.189051 1844089 docker.go:234] disabling docker service ...
	I1124 09:47:02.189218 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:02.205065 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:02.219126 1844089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:02.329712 1844089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:02.449311 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:02.462019 1844089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:02.474641 1844089 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1124 09:47:02.476035 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:02.633334 1844089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:02.633408 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.642946 1844089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:02.643028 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.652272 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.661578 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.670499 1844089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:02.678769 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.688087 1844089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.696980 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.705967 1844089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:02.713426 1844089 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1124 09:47:02.713510 1844089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:47:02.720989 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:02.841969 1844089 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:03.036830 1844089 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:03.036905 1844089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:03.040587 1844089 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1124 09:47:03.040611 1844089 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1124 09:47:03.040618 1844089 command_runner.go:130] > Device: 0,72	Inode: 1805        Links: 1
	I1124 09:47:03.040633 1844089 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:03.040639 1844089 command_runner.go:130] > Access: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040645 1844089 command_runner.go:130] > Modify: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040654 1844089 command_runner.go:130] > Change: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040658 1844089 command_runner.go:130] >  Birth: -
	I1124 09:47:03.041299 1844089 start.go:564] Will wait 60s for crictl version
	I1124 09:47:03.041375 1844089 ssh_runner.go:195] Run: which crictl
	I1124 09:47:03.044736 1844089 command_runner.go:130] > /usr/local/bin/crictl
	I1124 09:47:03.045405 1844089 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:47:03.072144 1844089 command_runner.go:130] > Version:  0.1.0
	I1124 09:47:03.072339 1844089 command_runner.go:130] > RuntimeName:  cri-o
	I1124 09:47:03.072489 1844089 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1124 09:47:03.072634 1844089 command_runner.go:130] > RuntimeApiVersion:  v1
	I1124 09:47:03.075078 1844089 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:47:03.075181 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.102664 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.102689 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.102697 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.102702 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.102708 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.102713 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.102717 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.102722 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.102726 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.102730 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.102734 1844089 command_runner.go:130] >      static
	I1124 09:47:03.102737 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.102741 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.102745 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.102753 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.102757 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.102763 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.102768 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.102772 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.102781 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.104732 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.133953 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.133980 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.133987 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.133991 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.133996 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.134000 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.134004 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.134008 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.134012 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.134016 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.134019 1844089 command_runner.go:130] >      static
	I1124 09:47:03.134023 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.134027 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.134031 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.134039 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.134043 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.134050 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.134056 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.134060 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.134068 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.140942 1844089 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:47:03.143873 1844089 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:47:03.160952 1844089 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:03.165052 1844089 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1124 09:47:03.165287 1844089 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:03.165490 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.325050 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.479106 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.632699 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:03.632773 1844089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:03.664623 1844089 command_runner.go:130] > {
	I1124 09:47:03.664647 1844089 command_runner.go:130] >   "images":  [
	I1124 09:47:03.664652 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664661 1844089 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1124 09:47:03.664666 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664683 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1124 09:47:03.664695 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664705 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664715 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1124 09:47:03.664722 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664727 1844089 command_runner.go:130] >       "size":  "29035622",
	I1124 09:47:03.664734 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664738 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664746 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664750 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664760 1844089 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1124 09:47:03.664768 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664775 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1124 09:47:03.664780 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664788 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664797 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1124 09:47:03.664804 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664808 1844089 command_runner.go:130] >       "size":  "74488375",
	I1124 09:47:03.664816 1844089 command_runner.go:130] >       "username":  "nonroot",
	I1124 09:47:03.664820 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664827 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664831 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664838 1844089 command_runner.go:130] >       "id":  "1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca",
	I1124 09:47:03.664845 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664851 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.24-0"
	I1124 09:47:03.664855 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664859 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664873 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:62cae8d38d7e1187ef2841ebc55bef1c5a46f21a69675fae8351f92d3a3e9bc6"
	I1124 09:47:03.664880 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664885 1844089 command_runner.go:130] >       "size":  "63341525",
	I1124 09:47:03.664892 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.664896 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.664904 1844089 command_runner.go:130] >       },
	I1124 09:47:03.664908 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664923 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664929 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664932 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664939 1844089 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1124 09:47:03.664947 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664951 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1124 09:47:03.664959 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664963 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664974 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1124 09:47:03.664987 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1124 09:47:03.664994 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664999 1844089 command_runner.go:130] >       "size":  "60857170",
	I1124 09:47:03.665002 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665009 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665013 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665016 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665020 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665024 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665028 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665039 1844089 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1124 09:47:03.665043 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665053 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1124 09:47:03.665057 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665065 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665078 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1124 09:47:03.665085 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665089 1844089 command_runner.go:130] >       "size":  "84947242",
	I1124 09:47:03.665093 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665131 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665140 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665144 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665148 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665155 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665163 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665174 1844089 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1124 09:47:03.665181 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665187 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1124 09:47:03.665195 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665198 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665206 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1124 09:47:03.665213 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665217 1844089 command_runner.go:130] >       "size":  "72167568",
	I1124 09:47:03.665221 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665229 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665232 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665236 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665244 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665247 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665254 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665262 1844089 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1124 09:47:03.665269 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665275 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1124 09:47:03.665278 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665285 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665292 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1124 09:47:03.665299 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665304 1844089 command_runner.go:130] >       "size":  "74105124",
	I1124 09:47:03.665308 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665315 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665319 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665326 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665333 1844089 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1124 09:47:03.665340 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665346 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1124 09:47:03.665353 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665357 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665369 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1124 09:47:03.665376 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665380 1844089 command_runner.go:130] >       "size":  "49819792",
	I1124 09:47:03.665384 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665388 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665396 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665401 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665405 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665412 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665415 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665426 1844089 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1124 09:47:03.665434 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665439 1844089 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.665442 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665446 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665456 1844089 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1124 09:47:03.665460 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665469 1844089 command_runner.go:130] >       "size":  "517328",
	I1124 09:47:03.665473 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665478 1844089 command_runner.go:130] >         "value":  "65535"
	I1124 09:47:03.665485 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665489 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665499 1844089 command_runner.go:130] >       "pinned":  true
	I1124 09:47:03.665506 1844089 command_runner.go:130] >     }
	I1124 09:47:03.665510 1844089 command_runner.go:130] >   ]
	I1124 09:47:03.665517 1844089 command_runner.go:130] > }
	I1124 09:47:03.667798 1844089 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:47:03.667821 1844089 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:47:03.667827 1844089 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:47:03.667924 1844089 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:47:03.668011 1844089 ssh_runner.go:195] Run: crio config
	I1124 09:47:03.726362 1844089 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1124 09:47:03.726390 1844089 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1124 09:47:03.726403 1844089 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1124 09:47:03.726416 1844089 command_runner.go:130] > #
	I1124 09:47:03.726461 1844089 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1124 09:47:03.726469 1844089 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1124 09:47:03.726481 1844089 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1124 09:47:03.726488 1844089 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1124 09:47:03.726498 1844089 command_runner.go:130] > # reload'.
	I1124 09:47:03.726518 1844089 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1124 09:47:03.726529 1844089 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1124 09:47:03.726536 1844089 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1124 09:47:03.726563 1844089 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1124 09:47:03.726573 1844089 command_runner.go:130] > [crio]
	I1124 09:47:03.726579 1844089 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1124 09:47:03.726585 1844089 command_runner.go:130] > # containers images, in this directory.
	I1124 09:47:03.727202 1844089 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1124 09:47:03.727221 1844089 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1124 09:47:03.727766 1844089 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1124 09:47:03.727795 1844089 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1124 09:47:03.728310 1844089 command_runner.go:130] > # imagestore = ""
	I1124 09:47:03.728328 1844089 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1124 09:47:03.728337 1844089 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1124 09:47:03.728921 1844089 command_runner.go:130] > # storage_driver = "overlay"
	I1124 09:47:03.728938 1844089 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1124 09:47:03.728946 1844089 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1124 09:47:03.729270 1844089 command_runner.go:130] > # storage_option = [
	I1124 09:47:03.729595 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.729612 1844089 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1124 09:47:03.729620 1844089 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1124 09:47:03.730268 1844089 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1124 09:47:03.730286 1844089 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1124 09:47:03.730295 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1124 09:47:03.730299 1844089 command_runner.go:130] > # always happen on a node reboot
	I1124 09:47:03.730901 1844089 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1124 09:47:03.730939 1844089 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1124 09:47:03.730951 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1124 09:47:03.730957 1844089 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1124 09:47:03.731426 1844089 command_runner.go:130] > # version_file_persist = ""
	I1124 09:47:03.731444 1844089 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1124 09:47:03.731453 1844089 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1124 09:47:03.732044 1844089 command_runner.go:130] > # internal_wipe = true
	I1124 09:47:03.732064 1844089 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1124 09:47:03.732071 1844089 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1124 09:47:03.732663 1844089 command_runner.go:130] > # internal_repair = true
	I1124 09:47:03.732708 1844089 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1124 09:47:03.732717 1844089 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1124 09:47:03.732723 1844089 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1124 09:47:03.733344 1844089 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1124 09:47:03.733360 1844089 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1124 09:47:03.733364 1844089 command_runner.go:130] > [crio.api]
	I1124 09:47:03.733370 1844089 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1124 09:47:03.733954 1844089 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1124 09:47:03.733970 1844089 command_runner.go:130] > # IP address on which the stream server will listen.
	I1124 09:47:03.734597 1844089 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1124 09:47:03.734618 1844089 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1124 09:47:03.734638 1844089 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1124 09:47:03.735322 1844089 command_runner.go:130] > # stream_port = "0"
	I1124 09:47:03.735342 1844089 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1124 09:47:03.735920 1844089 command_runner.go:130] > # stream_enable_tls = false
	I1124 09:47:03.735936 1844089 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1124 09:47:03.736379 1844089 command_runner.go:130] > # stream_idle_timeout = ""
	I1124 09:47:03.736427 1844089 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1124 09:47:03.736442 1844089 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1124 09:47:03.736931 1844089 command_runner.go:130] > # stream_tls_cert = ""
	I1124 09:47:03.736947 1844089 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1124 09:47:03.736954 1844089 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1124 09:47:03.737422 1844089 command_runner.go:130] > # stream_tls_key = ""
	I1124 09:47:03.737439 1844089 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1124 09:47:03.737447 1844089 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1124 09:47:03.737466 1844089 command_runner.go:130] > # automatically pick up the changes.
	I1124 09:47:03.737919 1844089 command_runner.go:130] > # stream_tls_ca = ""
	I1124 09:47:03.737973 1844089 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.738690 1844089 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1124 09:47:03.738709 1844089 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.739334 1844089 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1124 09:47:03.739351 1844089 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1124 09:47:03.739358 1844089 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1124 09:47:03.739383 1844089 command_runner.go:130] > [crio.runtime]
	I1124 09:47:03.739395 1844089 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1124 09:47:03.739402 1844089 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1124 09:47:03.739406 1844089 command_runner.go:130] > # "nofile=1024:2048"
	I1124 09:47:03.739432 1844089 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1124 09:47:03.739736 1844089 command_runner.go:130] > # default_ulimits = [
	I1124 09:47:03.740060 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.740075 1844089 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1124 09:47:03.740677 1844089 command_runner.go:130] > # no_pivot = false
	I1124 09:47:03.740693 1844089 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1124 09:47:03.740700 1844089 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1124 09:47:03.741305 1844089 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1124 09:47:03.741322 1844089 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1124 09:47:03.741328 1844089 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1124 09:47:03.741356 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.741816 1844089 command_runner.go:130] > # conmon = ""
	I1124 09:47:03.741833 1844089 command_runner.go:130] > # Cgroup setting for conmon
	I1124 09:47:03.741841 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1124 09:47:03.742193 1844089 command_runner.go:130] > conmon_cgroup = "pod"
	I1124 09:47:03.742211 1844089 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1124 09:47:03.742237 1844089 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1124 09:47:03.742253 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.742594 1844089 command_runner.go:130] > # conmon_env = [
	I1124 09:47:03.742962 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.742977 1844089 command_runner.go:130] > # Additional environment variables to set for all the
	I1124 09:47:03.742984 1844089 command_runner.go:130] > # containers. These are overridden if set in the
	I1124 09:47:03.742990 1844089 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1124 09:47:03.743288 1844089 command_runner.go:130] > # default_env = [
	I1124 09:47:03.743607 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.743619 1844089 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1124 09:47:03.743646 1844089 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1124 09:47:03.744217 1844089 command_runner.go:130] > # selinux = false
	I1124 09:47:03.744234 1844089 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1124 09:47:03.744279 1844089 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1124 09:47:03.744293 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.744768 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.744784 1844089 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1124 09:47:03.744790 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745254 1844089 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1124 09:47:03.745273 1844089 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1124 09:47:03.745281 1844089 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1124 09:47:03.745308 1844089 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1124 09:47:03.745322 1844089 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1124 09:47:03.745328 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745934 1844089 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1124 09:47:03.745975 1844089 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1124 09:47:03.745989 1844089 command_runner.go:130] > # the cgroup blockio controller.
	I1124 09:47:03.746500 1844089 command_runner.go:130] > # blockio_config_file = ""
	I1124 09:47:03.746515 1844089 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1124 09:47:03.746541 1844089 command_runner.go:130] > # blockio parameters.
	I1124 09:47:03.747165 1844089 command_runner.go:130] > # blockio_reload = false
	I1124 09:47:03.747182 1844089 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1124 09:47:03.747187 1844089 command_runner.go:130] > # irqbalance daemon.
	I1124 09:47:03.747784 1844089 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1124 09:47:03.747803 1844089 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1124 09:47:03.747830 1844089 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1124 09:47:03.747843 1844089 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1124 09:47:03.748453 1844089 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1124 09:47:03.748471 1844089 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1124 09:47:03.748496 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.748966 1844089 command_runner.go:130] > # rdt_config_file = ""
	I1124 09:47:03.748982 1844089 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1124 09:47:03.749348 1844089 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1124 09:47:03.749364 1844089 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1124 09:47:03.749770 1844089 command_runner.go:130] > # separate_pull_cgroup = ""
	I1124 09:47:03.749788 1844089 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1124 09:47:03.749796 1844089 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1124 09:47:03.749820 1844089 command_runner.go:130] > # will be added.
	I1124 09:47:03.749833 1844089 command_runner.go:130] > # default_capabilities = [
	I1124 09:47:03.750067 1844089 command_runner.go:130] > # 	"CHOWN",
	I1124 09:47:03.750401 1844089 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1124 09:47:03.750646 1844089 command_runner.go:130] > # 	"FSETID",
	I1124 09:47:03.750659 1844089 command_runner.go:130] > # 	"FOWNER",
	I1124 09:47:03.750665 1844089 command_runner.go:130] > # 	"SETGID",
	I1124 09:47:03.750669 1844089 command_runner.go:130] > # 	"SETUID",
	I1124 09:47:03.750725 1844089 command_runner.go:130] > # 	"SETPCAP",
	I1124 09:47:03.750739 1844089 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1124 09:47:03.750745 1844089 command_runner.go:130] > # 	"KILL",
	I1124 09:47:03.750755 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.750774 1844089 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1124 09:47:03.750785 1844089 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1124 09:47:03.750991 1844089 command_runner.go:130] > # add_inheritable_capabilities = false
	I1124 09:47:03.751004 1844089 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1124 09:47:03.751023 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751034 1844089 command_runner.go:130] > default_sysctls = [
	I1124 09:47:03.751219 1844089 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1124 09:47:03.751480 1844089 command_runner.go:130] > ]
	I1124 09:47:03.751494 1844089 command_runner.go:130] > # List of devices on the host that a
	I1124 09:47:03.751501 1844089 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1124 09:47:03.751522 1844089 command_runner.go:130] > # allowed_devices = [
	I1124 09:47:03.751532 1844089 command_runner.go:130] > # 	"/dev/fuse",
	I1124 09:47:03.751536 1844089 command_runner.go:130] > # 	"/dev/net/tun",
	I1124 09:47:03.751539 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751545 1844089 command_runner.go:130] > # List of additional devices. specified as
	I1124 09:47:03.751558 1844089 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1124 09:47:03.751576 1844089 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1124 09:47:03.751614 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751625 1844089 command_runner.go:130] > # additional_devices = [
	I1124 09:47:03.751802 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751816 1844089 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1124 09:47:03.752056 1844089 command_runner.go:130] > # cdi_spec_dirs = [
	I1124 09:47:03.752288 1844089 command_runner.go:130] > # 	"/etc/cdi",
	I1124 09:47:03.752302 1844089 command_runner.go:130] > # 	"/var/run/cdi",
	I1124 09:47:03.752307 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752313 1844089 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1124 09:47:03.752348 1844089 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1124 09:47:03.752353 1844089 command_runner.go:130] > # Defaults to false.
	I1124 09:47:03.752752 1844089 command_runner.go:130] > # device_ownership_from_security_context = false
	I1124 09:47:03.752770 1844089 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1124 09:47:03.752778 1844089 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1124 09:47:03.752782 1844089 command_runner.go:130] > # hooks_dir = [
	I1124 09:47:03.752808 1844089 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1124 09:47:03.752819 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752826 1844089 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1124 09:47:03.752833 1844089 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1124 09:47:03.752842 1844089 command_runner.go:130] > # its default mounts from the following two files:
	I1124 09:47:03.752845 1844089 command_runner.go:130] > #
	I1124 09:47:03.752852 1844089 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1124 09:47:03.752858 1844089 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1124 09:47:03.752881 1844089 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1124 09:47:03.752891 1844089 command_runner.go:130] > #
	I1124 09:47:03.752897 1844089 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1124 09:47:03.752913 1844089 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1124 09:47:03.752928 1844089 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1124 09:47:03.752934 1844089 command_runner.go:130] > #      only add mounts it finds in this file.
	I1124 09:47:03.752937 1844089 command_runner.go:130] > #
	I1124 09:47:03.752941 1844089 command_runner.go:130] > # default_mounts_file = ""
	I1124 09:47:03.752946 1844089 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1124 09:47:03.752955 1844089 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1124 09:47:03.753190 1844089 command_runner.go:130] > # pids_limit = -1
	I1124 09:47:03.753207 1844089 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1124 09:47:03.753245 1844089 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1124 09:47:03.753260 1844089 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1124 09:47:03.753269 1844089 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1124 09:47:03.753278 1844089 command_runner.go:130] > # log_size_max = -1
	I1124 09:47:03.753287 1844089 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1124 09:47:03.753296 1844089 command_runner.go:130] > # log_to_journald = false
	I1124 09:47:03.753313 1844089 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1124 09:47:03.753722 1844089 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1124 09:47:03.753734 1844089 command_runner.go:130] > # Path to directory for container attach sockets.
	I1124 09:47:03.753771 1844089 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1124 09:47:03.753785 1844089 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1124 09:47:03.753789 1844089 command_runner.go:130] > # bind_mount_prefix = ""
	I1124 09:47:03.753796 1844089 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1124 09:47:03.753804 1844089 command_runner.go:130] > # read_only = false
	I1124 09:47:03.753810 1844089 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1124 09:47:03.753817 1844089 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1124 09:47:03.753824 1844089 command_runner.go:130] > # live configuration reload.
	I1124 09:47:03.753828 1844089 command_runner.go:130] > # log_level = "info"
	I1124 09:47:03.753845 1844089 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1124 09:47:03.753857 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.754025 1844089 command_runner.go:130] > # log_filter = ""
	I1124 09:47:03.754041 1844089 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754049 1844089 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1124 09:47:03.754066 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754079 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754487 1844089 command_runner.go:130] > # uid_mappings = ""
	I1124 09:47:03.754504 1844089 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754512 1844089 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1124 09:47:03.754516 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754547 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754559 1844089 command_runner.go:130] > # gid_mappings = ""
	I1124 09:47:03.754565 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1124 09:47:03.754572 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754582 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754590 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754595 1844089 command_runner.go:130] > # minimum_mappable_uid = -1
	I1124 09:47:03.754627 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1124 09:47:03.754641 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754648 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754662 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754929 1844089 command_runner.go:130] > # minimum_mappable_gid = -1
	I1124 09:47:03.754942 1844089 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1124 09:47:03.754970 1844089 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1124 09:47:03.754983 1844089 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1124 09:47:03.754989 1844089 command_runner.go:130] > # ctr_stop_timeout = 30
	I1124 09:47:03.754994 1844089 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1124 09:47:03.755006 1844089 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1124 09:47:03.755011 1844089 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1124 09:47:03.755016 1844089 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1124 09:47:03.755021 1844089 command_runner.go:130] > # drop_infra_ctr = true
	I1124 09:47:03.755048 1844089 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1124 09:47:03.755061 1844089 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1124 09:47:03.755080 1844089 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1124 09:47:03.755090 1844089 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1124 09:47:03.755098 1844089 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1124 09:47:03.755104 1844089 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1124 09:47:03.755110 1844089 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1124 09:47:03.755118 1844089 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1124 09:47:03.755122 1844089 command_runner.go:130] > # shared_cpuset = ""
	I1124 09:47:03.755135 1844089 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1124 09:47:03.755143 1844089 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1124 09:47:03.755164 1844089 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1124 09:47:03.755182 1844089 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1124 09:47:03.755369 1844089 command_runner.go:130] > # pinns_path = ""
	I1124 09:47:03.755383 1844089 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1124 09:47:03.755391 1844089 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1124 09:47:03.755617 1844089 command_runner.go:130] > # enable_criu_support = true
	I1124 09:47:03.755632 1844089 command_runner.go:130] > # Enable/disable the generation of the container,
	I1124 09:47:03.755639 1844089 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1124 09:47:03.755935 1844089 command_runner.go:130] > # enable_pod_events = false
	I1124 09:47:03.755951 1844089 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1124 09:47:03.755976 1844089 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1124 09:47:03.755988 1844089 command_runner.go:130] > # default_runtime = "crun"
	I1124 09:47:03.756007 1844089 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1124 09:47:03.756063 1844089 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1124 09:47:03.756088 1844089 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1124 09:47:03.756099 1844089 command_runner.go:130] > # creation as a file is not desired either.
	I1124 09:47:03.756108 1844089 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1124 09:47:03.756127 1844089 command_runner.go:130] > # the hostname is being managed dynamically.
	I1124 09:47:03.756133 1844089 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1124 09:47:03.756166 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.756181 1844089 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1124 09:47:03.756199 1844089 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1124 09:47:03.756211 1844089 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1124 09:47:03.756217 1844089 command_runner.go:130] > # Each entry in the table should follow the format:
	I1124 09:47:03.756220 1844089 command_runner.go:130] > #
	I1124 09:47:03.756230 1844089 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1124 09:47:03.756235 1844089 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1124 09:47:03.756244 1844089 command_runner.go:130] > # runtime_type = "oci"
	I1124 09:47:03.756248 1844089 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1124 09:47:03.756253 1844089 command_runner.go:130] > # inherit_default_runtime = false
	I1124 09:47:03.756258 1844089 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1124 09:47:03.756285 1844089 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1124 09:47:03.756297 1844089 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1124 09:47:03.756301 1844089 command_runner.go:130] > # monitor_env = []
	I1124 09:47:03.756306 1844089 command_runner.go:130] > # privileged_without_host_devices = false
	I1124 09:47:03.756313 1844089 command_runner.go:130] > # allowed_annotations = []
	I1124 09:47:03.756319 1844089 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1124 09:47:03.756330 1844089 command_runner.go:130] > # no_sync_log = false
	I1124 09:47:03.756335 1844089 command_runner.go:130] > # default_annotations = {}
	I1124 09:47:03.756339 1844089 command_runner.go:130] > # stream_websockets = false
	I1124 09:47:03.756349 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.756390 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.756402 1844089 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1124 09:47:03.756409 1844089 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1124 09:47:03.756416 1844089 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1124 09:47:03.756427 1844089 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1124 09:47:03.756448 1844089 command_runner.go:130] > #   in $PATH.
	I1124 09:47:03.756456 1844089 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1124 09:47:03.756461 1844089 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1124 09:47:03.756468 1844089 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1124 09:47:03.756477 1844089 command_runner.go:130] > #   state.
	I1124 09:47:03.756489 1844089 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1124 09:47:03.756495 1844089 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1124 09:47:03.756515 1844089 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1124 09:47:03.756528 1844089 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1124 09:47:03.756534 1844089 command_runner.go:130] > #   the values from the default runtime on load time.
	I1124 09:47:03.756542 1844089 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1124 09:47:03.756551 1844089 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1124 09:47:03.756557 1844089 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1124 09:47:03.756564 1844089 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1124 09:47:03.756571 1844089 command_runner.go:130] > #   The currently recognized values are:
	I1124 09:47:03.756579 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1124 09:47:03.756608 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1124 09:47:03.756621 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1124 09:47:03.756627 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1124 09:47:03.756635 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1124 09:47:03.756647 1844089 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1124 09:47:03.756654 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1124 09:47:03.756661 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1124 09:47:03.756671 1844089 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1124 09:47:03.756687 1844089 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1124 09:47:03.756700 1844089 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1124 09:47:03.756720 1844089 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1124 09:47:03.756731 1844089 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1124 09:47:03.756738 1844089 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1124 09:47:03.756751 1844089 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1124 09:47:03.756759 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1124 09:47:03.756769 1844089 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1124 09:47:03.756774 1844089 command_runner.go:130] > #   deprecated option "conmon".
	I1124 09:47:03.756781 1844089 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1124 09:47:03.756803 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1124 09:47:03.756820 1844089 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1124 09:47:03.756831 1844089 command_runner.go:130] > #   should be moved to the container's cgroup
	I1124 09:47:03.756843 1844089 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1124 09:47:03.756853 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1124 09:47:03.756862 1844089 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1124 09:47:03.756870 1844089 command_runner.go:130] > #   conmon-rs by using:
	I1124 09:47:03.756878 1844089 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1124 09:47:03.756886 1844089 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1124 09:47:03.756907 1844089 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1124 09:47:03.756926 1844089 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1124 09:47:03.756938 1844089 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1124 09:47:03.756945 1844089 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1124 09:47:03.756958 1844089 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1124 09:47:03.756963 1844089 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1124 09:47:03.756972 1844089 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1124 09:47:03.756984 1844089 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1124 09:47:03.756999 1844089 command_runner.go:130] > #   when a machine crash happens.
	I1124 09:47:03.757012 1844089 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1124 09:47:03.757021 1844089 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1124 09:47:03.757033 1844089 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1124 09:47:03.757038 1844089 command_runner.go:130] > #   seccomp profile for the runtime.
	I1124 09:47:03.757047 1844089 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1124 09:47:03.757058 1844089 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1124 09:47:03.757076 1844089 command_runner.go:130] > #
	I1124 09:47:03.757087 1844089 command_runner.go:130] > # Using the seccomp notifier feature:
	I1124 09:47:03.757091 1844089 command_runner.go:130] > #
	I1124 09:47:03.757115 1844089 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1124 09:47:03.757130 1844089 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1124 09:47:03.757134 1844089 command_runner.go:130] > #
	I1124 09:47:03.757141 1844089 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1124 09:47:03.757151 1844089 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1124 09:47:03.757154 1844089 command_runner.go:130] > #
	I1124 09:47:03.757165 1844089 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1124 09:47:03.757172 1844089 command_runner.go:130] > # feature.
	I1124 09:47:03.757175 1844089 command_runner.go:130] > #
	I1124 09:47:03.757195 1844089 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1124 09:47:03.757204 1844089 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1124 09:47:03.757220 1844089 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1124 09:47:03.757233 1844089 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1124 09:47:03.757239 1844089 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1124 09:47:03.757247 1844089 command_runner.go:130] > #
	I1124 09:47:03.757258 1844089 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1124 09:47:03.757268 1844089 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1124 09:47:03.757271 1844089 command_runner.go:130] > #
	I1124 09:47:03.757277 1844089 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1124 09:47:03.757283 1844089 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1124 09:47:03.757298 1844089 command_runner.go:130] > #
	I1124 09:47:03.757320 1844089 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1124 09:47:03.757333 1844089 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1124 09:47:03.757341 1844089 command_runner.go:130] > # limitation.
	I1124 09:47:03.757617 1844089 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1124 09:47:03.757630 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1124 09:47:03.757635 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.757639 1844089 command_runner.go:130] > runtime_root = "/run/crun"
	I1124 09:47:03.757643 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.757670 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.757675 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.757680 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.757690 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.757695 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.757700 1844089 command_runner.go:130] > allowed_annotations = [
	I1124 09:47:03.757954 1844089 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1124 09:47:03.757971 1844089 command_runner.go:130] > ]
	I1124 09:47:03.757978 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.757982 1844089 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1124 09:47:03.758003 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1124 09:47:03.758013 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.758018 1844089 command_runner.go:130] > runtime_root = "/run/runc"
	I1124 09:47:03.758023 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.758033 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.758037 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.758042 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.758047 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.758051 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.758456 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.758471 1844089 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1124 09:47:03.758477 1844089 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1124 09:47:03.758504 1844089 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1124 09:47:03.758514 1844089 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1124 09:47:03.758525 1844089 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1124 09:47:03.758550 1844089 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1124 09:47:03.758572 1844089 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1124 09:47:03.758585 1844089 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1124 09:47:03.758595 1844089 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1124 09:47:03.758608 1844089 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1124 09:47:03.758614 1844089 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1124 09:47:03.758621 1844089 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1124 09:47:03.758629 1844089 command_runner.go:130] > # Example:
	I1124 09:47:03.758634 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1124 09:47:03.758650 1844089 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1124 09:47:03.758663 1844089 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1124 09:47:03.758670 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1124 09:47:03.758684 1844089 command_runner.go:130] > # cpuset = "0-1"
	I1124 09:47:03.758691 1844089 command_runner.go:130] > # cpushares = "5"
	I1124 09:47:03.758695 1844089 command_runner.go:130] > # cpuquota = "1000"
	I1124 09:47:03.758700 1844089 command_runner.go:130] > # cpuperiod = "100000"
	I1124 09:47:03.758703 1844089 command_runner.go:130] > # cpulimit = "35"
	I1124 09:47:03.758714 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.758719 1844089 command_runner.go:130] > # The workload name is workload-type.
	I1124 09:47:03.758726 1844089 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1124 09:47:03.758738 1844089 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1124 09:47:03.758744 1844089 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1124 09:47:03.758763 1844089 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1124 09:47:03.758772 1844089 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1124 09:47:03.758787 1844089 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1124 09:47:03.758800 1844089 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1124 09:47:03.758805 1844089 command_runner.go:130] > # Default value is set to true
	I1124 09:47:03.758816 1844089 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1124 09:47:03.758822 1844089 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1124 09:47:03.758827 1844089 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1124 09:47:03.758837 1844089 command_runner.go:130] > # Default value is set to 'false'
	I1124 09:47:03.758841 1844089 command_runner.go:130] > # disable_hostport_mapping = false
	I1124 09:47:03.758846 1844089 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1124 09:47:03.758869 1844089 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1124 09:47:03.759115 1844089 command_runner.go:130] > # timezone = ""
	I1124 09:47:03.759131 1844089 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1124 09:47:03.759134 1844089 command_runner.go:130] > #
	I1124 09:47:03.759141 1844089 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1124 09:47:03.759163 1844089 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1124 09:47:03.759174 1844089 command_runner.go:130] > [crio.image]
	I1124 09:47:03.759180 1844089 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1124 09:47:03.759194 1844089 command_runner.go:130] > # default_transport = "docker://"
	I1124 09:47:03.759204 1844089 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1124 09:47:03.759211 1844089 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759215 1844089 command_runner.go:130] > # global_auth_file = ""
	I1124 09:47:03.759237 1844089 command_runner.go:130] > # The image used to instantiate infra containers.
	I1124 09:47:03.759259 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759457 1844089 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.759477 1844089 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1124 09:47:03.759497 1844089 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759511 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759702 1844089 command_runner.go:130] > # pause_image_auth_file = ""
	I1124 09:47:03.759716 1844089 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1124 09:47:03.759723 1844089 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1124 09:47:03.759742 1844089 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1124 09:47:03.759757 1844089 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1124 09:47:03.760047 1844089 command_runner.go:130] > # pause_command = "/pause"
	I1124 09:47:03.760064 1844089 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1124 09:47:03.760071 1844089 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1124 09:47:03.760077 1844089 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1124 09:47:03.760108 1844089 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1124 09:47:03.760115 1844089 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1124 09:47:03.760126 1844089 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1124 09:47:03.760131 1844089 command_runner.go:130] > # pinned_images = [
	I1124 09:47:03.760134 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760140 1844089 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1124 09:47:03.760146 1844089 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1124 09:47:03.760157 1844089 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1124 09:47:03.760175 1844089 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1124 09:47:03.760186 1844089 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1124 09:47:03.760191 1844089 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1124 09:47:03.760197 1844089 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1124 09:47:03.760209 1844089 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1124 09:47:03.760216 1844089 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1124 09:47:03.760225 1844089 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1124 09:47:03.760231 1844089 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1124 09:47:03.760246 1844089 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1124 09:47:03.760260 1844089 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1124 09:47:03.760282 1844089 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1124 09:47:03.760292 1844089 command_runner.go:130] > # changing them here.
	I1124 09:47:03.760298 1844089 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1124 09:47:03.760302 1844089 command_runner.go:130] > # insecure_registries = [
	I1124 09:47:03.760312 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760318 1844089 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1124 09:47:03.760329 1844089 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1124 09:47:03.760704 1844089 command_runner.go:130] > # image_volumes = "mkdir"
	I1124 09:47:03.760720 1844089 command_runner.go:130] > # Temporary directory to use for storing big files
	I1124 09:47:03.760964 1844089 command_runner.go:130] > # big_files_temporary_dir = ""
	I1124 09:47:03.760980 1844089 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1124 09:47:03.760987 1844089 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1124 09:47:03.760992 1844089 command_runner.go:130] > # auto_reload_registries = false
	I1124 09:47:03.761030 1844089 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1124 09:47:03.761047 1844089 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1124 09:47:03.761054 1844089 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1124 09:47:03.761232 1844089 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1124 09:47:03.761247 1844089 command_runner.go:130] > # The mode of short name resolution.
	I1124 09:47:03.761255 1844089 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1124 09:47:03.761263 1844089 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1124 09:47:03.761289 1844089 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1124 09:47:03.761475 1844089 command_runner.go:130] > # short_name_mode = "enforcing"
	I1124 09:47:03.761491 1844089 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1124 09:47:03.761498 1844089 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1124 09:47:03.761714 1844089 command_runner.go:130] > # oci_artifact_mount_support = true
	I1124 09:47:03.761730 1844089 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1124 09:47:03.761735 1844089 command_runner.go:130] > # CNI plugins.
	I1124 09:47:03.761738 1844089 command_runner.go:130] > [crio.network]
	I1124 09:47:03.761777 1844089 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1124 09:47:03.761790 1844089 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1124 09:47:03.761797 1844089 command_runner.go:130] > # cni_default_network = ""
	I1124 09:47:03.761810 1844089 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1124 09:47:03.761814 1844089 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1124 09:47:03.761820 1844089 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1124 09:47:03.761839 1844089 command_runner.go:130] > # plugin_dirs = [
	I1124 09:47:03.762075 1844089 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1124 09:47:03.762088 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762092 1844089 command_runner.go:130] > # List of included pod metrics.
	I1124 09:47:03.762097 1844089 command_runner.go:130] > # included_pod_metrics = [
	I1124 09:47:03.762100 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762106 1844089 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1124 09:47:03.762124 1844089 command_runner.go:130] > [crio.metrics]
	I1124 09:47:03.762136 1844089 command_runner.go:130] > # Globally enable or disable metrics support.
	I1124 09:47:03.762321 1844089 command_runner.go:130] > # enable_metrics = false
	I1124 09:47:03.762336 1844089 command_runner.go:130] > # Specify enabled metrics collectors.
	I1124 09:47:03.762342 1844089 command_runner.go:130] > # Per default all metrics are enabled.
	I1124 09:47:03.762349 1844089 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1124 09:47:03.762356 1844089 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1124 09:47:03.762386 1844089 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1124 09:47:03.762392 1844089 command_runner.go:130] > # metrics_collectors = [
	I1124 09:47:03.763119 1844089 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1124 09:47:03.763143 1844089 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1124 09:47:03.763149 1844089 command_runner.go:130] > # 	"containers_oom_total",
	I1124 09:47:03.763153 1844089 command_runner.go:130] > # 	"processes_defunct",
	I1124 09:47:03.763188 1844089 command_runner.go:130] > # 	"operations_total",
	I1124 09:47:03.763201 1844089 command_runner.go:130] > # 	"operations_latency_seconds",
	I1124 09:47:03.763207 1844089 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1124 09:47:03.763212 1844089 command_runner.go:130] > # 	"operations_errors_total",
	I1124 09:47:03.763216 1844089 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1124 09:47:03.763221 1844089 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1124 09:47:03.763226 1844089 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1124 09:47:03.763237 1844089 command_runner.go:130] > # 	"image_pulls_success_total",
	I1124 09:47:03.763260 1844089 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1124 09:47:03.763265 1844089 command_runner.go:130] > # 	"containers_oom_count_total",
	I1124 09:47:03.763270 1844089 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1124 09:47:03.763282 1844089 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1124 09:47:03.763286 1844089 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1124 09:47:03.763290 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763295 1844089 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1124 09:47:03.763300 1844089 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1124 09:47:03.763305 1844089 command_runner.go:130] > # The port on which the metrics server will listen.
	I1124 09:47:03.763313 1844089 command_runner.go:130] > # metrics_port = 9090
	I1124 09:47:03.763327 1844089 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1124 09:47:03.763337 1844089 command_runner.go:130] > # metrics_socket = ""
	I1124 09:47:03.763343 1844089 command_runner.go:130] > # The certificate for the secure metrics server.
	I1124 09:47:03.763349 1844089 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1124 09:47:03.763360 1844089 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1124 09:47:03.763365 1844089 command_runner.go:130] > # certificate on any modification event.
	I1124 09:47:03.763369 1844089 command_runner.go:130] > # metrics_cert = ""
	I1124 09:47:03.763375 1844089 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1124 09:47:03.763379 1844089 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1124 09:47:03.763384 1844089 command_runner.go:130] > # metrics_key = ""
	I1124 09:47:03.763415 1844089 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1124 09:47:03.763426 1844089 command_runner.go:130] > [crio.tracing]
	I1124 09:47:03.763442 1844089 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1124 09:47:03.763451 1844089 command_runner.go:130] > # enable_tracing = false
	I1124 09:47:03.763456 1844089 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1124 09:47:03.763461 1844089 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1124 09:47:03.763468 1844089 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1124 09:47:03.763476 1844089 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1124 09:47:03.763481 1844089 command_runner.go:130] > # CRI-O NRI configuration.
	I1124 09:47:03.763500 1844089 command_runner.go:130] > [crio.nri]
	I1124 09:47:03.763505 1844089 command_runner.go:130] > # Globally enable or disable NRI.
	I1124 09:47:03.763508 1844089 command_runner.go:130] > # enable_nri = true
	I1124 09:47:03.763524 1844089 command_runner.go:130] > # NRI socket to listen on.
	I1124 09:47:03.763535 1844089 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1124 09:47:03.763540 1844089 command_runner.go:130] > # NRI plugin directory to use.
	I1124 09:47:03.763544 1844089 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1124 09:47:03.763552 1844089 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1124 09:47:03.763560 1844089 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1124 09:47:03.763566 1844089 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1124 09:47:03.763634 1844089 command_runner.go:130] > # nri_disable_connections = false
	I1124 09:47:03.763648 1844089 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1124 09:47:03.763654 1844089 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1124 09:47:03.763669 1844089 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1124 09:47:03.763681 1844089 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1124 09:47:03.763685 1844089 command_runner.go:130] > # NRI default validator configuration.
	I1124 09:47:03.763692 1844089 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1124 09:47:03.763699 1844089 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1124 09:47:03.763703 1844089 command_runner.go:130] > # can be restricted/rejected:
	I1124 09:47:03.763707 1844089 command_runner.go:130] > # - OCI hook injection
	I1124 09:47:03.763719 1844089 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1124 09:47:03.763724 1844089 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1124 09:47:03.763730 1844089 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1124 09:47:03.763748 1844089 command_runner.go:130] > # - adjustment of linux namespaces
	I1124 09:47:03.763770 1844089 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1124 09:47:03.763778 1844089 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1124 09:47:03.763789 1844089 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1124 09:47:03.763792 1844089 command_runner.go:130] > #
	I1124 09:47:03.763797 1844089 command_runner.go:130] > # [crio.nri.default_validator]
	I1124 09:47:03.763802 1844089 command_runner.go:130] > # nri_enable_default_validator = false
	I1124 09:47:03.763807 1844089 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1124 09:47:03.763813 1844089 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1124 09:47:03.763843 1844089 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1124 09:47:03.763859 1844089 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1124 09:47:03.763864 1844089 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1124 09:47:03.763875 1844089 command_runner.go:130] > # nri_validator_required_plugins = [
	I1124 09:47:03.763879 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763885 1844089 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1124 09:47:03.763897 1844089 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1124 09:47:03.763900 1844089 command_runner.go:130] > [crio.stats]
	I1124 09:47:03.763906 1844089 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1124 09:47:03.763912 1844089 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1124 09:47:03.763930 1844089 command_runner.go:130] > # stats_collection_period = 0
	I1124 09:47:03.763938 1844089 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1124 09:47:03.763955 1844089 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1124 09:47:03.763966 1844089 command_runner.go:130] > # collection_period = 0
	I1124 09:47:03.765749 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69660512Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1124 09:47:03.765775 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696644858Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1124 09:47:03.765802 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696680353Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1124 09:47:03.765817 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696705773Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1124 09:47:03.765831 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696792248Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:03.765844 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69715048Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1124 09:47:03.765855 1844089 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
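	The startup messages above show CRI-O merging drop-in files from /etc/crio/crio.conf.d in lexical order. A minimal sketch of staging such an override (paths mirror the log, but the filename and the values are illustrative, not minikube's actual drop-ins; /tmp is used instead of /etc/crio):

```shell
# Stage a hypothetical CRI-O drop-in that enables the metrics server.
# Files in crio.conf.d are applied in lexical order, so a high prefix
# like 99- overrides earlier drop-ins such as 02-crio.conf above.
mkdir -p /tmp/crio.conf.d
cat > /tmp/crio.conf.d/99-metrics.conf <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_port = 9090
EOF
grep -q 'metrics_port = 9090' /tmp/crio.conf.d/99-metrics.conf && echo staged
```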
	I1124 09:47:03.766230 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:47:03.766250 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:47:03.766285 1844089 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:47:03.766313 1844089 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:47:03.766550 1844089 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
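	One invariant the generated config above encodes is that the kubelet's cgroupDriver (cgroupfs) must match the container runtime's, or pods fail to start. A minimal sketch of checking that field from a KubeletConfiguration fragment (the /tmp path and the awk probe are illustrative, not minikube code):

```shell
# Write a trimmed-down KubeletConfiguration mirroring the one above,
# then extract cgroupDriver and verify it is the expected cgroupfs.
cat > /tmp/kubelet-config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
EOF
driver=$(awk '/^cgroupDriver:/ {print $2}' /tmp/kubelet-config.yaml)
[ "$driver" = "cgroupfs" ] && echo "cgroup driver: $driver"
```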
	
	I1124 09:47:03.766656 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:47:03.773791 1844089 command_runner.go:130] > kubeadm
	I1124 09:47:03.773812 1844089 command_runner.go:130] > kubectl
	I1124 09:47:03.773818 1844089 command_runner.go:130] > kubelet
	I1124 09:47:03.774893 1844089 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:47:03.774995 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:47:03.782726 1844089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:47:03.796280 1844089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:47:03.809559 1844089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:47:03.822485 1844089 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:47:03.826210 1844089 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1124 09:47:03.826334 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:03.934288 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:04.458773 1844089 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:47:04.458800 1844089 certs.go:195] generating shared ca certs ...
	I1124 09:47:04.458824 1844089 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:04.458988 1844089 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:47:04.459071 1844089 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:47:04.459080 1844089 certs.go:257] generating profile certs ...
	I1124 09:47:04.459195 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:47:04.459263 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:47:04.459319 1844089 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:47:04.459333 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1124 09:47:04.459352 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1124 09:47:04.459364 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1124 09:47:04.459374 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1124 09:47:04.459384 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1124 09:47:04.459403 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1124 09:47:04.459415 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1124 09:47:04.459426 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1124 09:47:04.459482 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:47:04.459525 1844089 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:47:04.459534 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:47:04.459574 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:47:04.459609 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:47:04.459638 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:47:04.459701 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:04.459738 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.459752 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem -> /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.459763 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.460411 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:47:04.483964 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:47:04.505086 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:47:04.526066 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:47:04.552811 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:47:04.572010 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:47:04.590830 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:47:04.609063 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:47:04.627178 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:47:04.645228 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:47:04.662875 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:47:04.680934 1844089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:47:04.694072 1844089 ssh_runner.go:195] Run: openssl version
	I1124 09:47:04.700410 1844089 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1124 09:47:04.700488 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:47:04.708800 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712351 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712441 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712518 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.755374 1844089 command_runner.go:130] > 3ec20f2e
	I1124 09:47:04.755866 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:47:04.763956 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:47:04.772579 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776497 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776523 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776574 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.817126 1844089 command_runner.go:130] > b5213941
	I1124 09:47:04.817555 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:47:04.825631 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:47:04.834323 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838391 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838437 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838503 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.879479 1844089 command_runner.go:130] > 51391683
	I1124 09:47:04.879964 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
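	The hash-then-symlink sequence above is OpenSSL's standard CA lookup scheme: `openssl x509 -hash` prints the 8-hex-digit subject hash (e.g. b5213941, 51391683 in the log), and the cert is linked as <hash>.0 so libraries can find it by name. A sketch with a throwaway self-signed cert in /tmp instead of /etc/ssl/certs (not minikube's actual code):

```shell
# Generate a throwaway self-signed cert, then reproduce the
# subject-hash symlink that minikube creates under /etc/ssl/certs.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 2 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)  # 8 hex digits
ln -fs /tmp/demo.pem "/tmp/${hash}.0"  # OpenSSL looks up CAs as <hash>.0
test -L "/tmp/${hash}.0" && echo "linked as ${hash}.0"
```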
	I1124 09:47:04.888201 1844089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892298 1844089 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892323 1844089 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1124 09:47:04.892330 1844089 command_runner.go:130] > Device: 259,1	Inode: 1049847     Links: 1
	I1124 09:47:04.892337 1844089 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:04.892344 1844089 command_runner.go:130] > Access: 2025-11-24 09:42:55.781942492 +0000
	I1124 09:47:04.892349 1844089 command_runner.go:130] > Modify: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892354 1844089 command_runner.go:130] > Change: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892360 1844089 command_runner.go:130] >  Birth: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892420 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:47:04.935687 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.935791 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:47:04.977560 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.978011 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:47:05.021496 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.021984 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:47:05.064844 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.065359 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:47:05.108127 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.108275 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:47:05.149417 1844089 command_runner.go:130] > Certificate will not expire
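	The repeated "Certificate will not expire" lines above are the literal stdout of `openssl x509 -checkend 86400`, which exits 0 only if the cert is still valid 86400 seconds (24h) from now. A sketch against a throwaway cert (the /tmp path is illustrative):

```shell
# Self-signed cert valid for 2 days, so the 24h checkend probe passes.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=expiry-demo" \
  -keyout /tmp/exp.key -out /tmp/exp.pem -days 2 2>/dev/null
# Prints "Certificate will not expire" and exits 0 while more than
# 86400 seconds of validity remain; otherwise prints "will expire".
openssl x509 -noout -in /tmp/exp.pem -checkend 86400
```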
	I1124 09:47:05.149874 1844089 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:05.149970 1844089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:47:05.150065 1844089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:05.178967 1844089 cri.go:89] found id: ""
	I1124 09:47:05.179068 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:47:05.186015 1844089 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1124 09:47:05.186039 1844089 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1124 09:47:05.186047 1844089 command_runner.go:130] > /var/lib/minikube/etcd:
	I1124 09:47:05.187003 1844089 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:47:05.187020 1844089 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:47:05.187103 1844089 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:47:05.195380 1844089 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:47:05.195777 1844089 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-373432" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.195884 1844089 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-1804834/kubeconfig needs updating (will repair): [kubeconfig missing "functional-373432" cluster setting kubeconfig missing "functional-373432" context setting]
	I1124 09:47:05.196176 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.196576 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.196729 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.197389 1844089 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 09:47:05.197410 1844089 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 09:47:05.197417 1844089 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 09:47:05.197421 1844089 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 09:47:05.197425 1844089 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 09:47:05.197478 1844089 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1124 09:47:05.197834 1844089 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:47:05.206841 1844089 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1124 09:47:05.206877 1844089 kubeadm.go:602] duration metric: took 19.851198ms to restartPrimaryControlPlane
	I1124 09:47:05.206901 1844089 kubeadm.go:403] duration metric: took 57.044926ms to StartCluster
	I1124 09:47:05.206915 1844089 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.206989 1844089 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.207632 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.208100 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:05.207869 1844089 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:05.208216 1844089 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:05.208554 1844089 addons.go:70] Setting storage-provisioner=true in profile "functional-373432"
	I1124 09:47:05.208570 1844089 addons.go:239] Setting addon storage-provisioner=true in "functional-373432"
	I1124 09:47:05.208595 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.208650 1844089 addons.go:70] Setting default-storageclass=true in profile "functional-373432"
	I1124 09:47:05.208696 1844089 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-373432"
	I1124 09:47:05.208964 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.209057 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.215438 1844089 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:05.218563 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:05.247382 1844089 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:05.249311 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.249495 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.249781 1844089 addons.go:239] Setting addon default-storageclass=true in "functional-373432"
	I1124 09:47:05.249815 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.250242 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.250436 1844089 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.250452 1844089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:05.250491 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.282635 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.300501 1844089 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:05.300528 1844089 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:05.300592 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.336568 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.425988 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:05.454084 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.488439 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.208671 1844089 node_ready.go:35] waiting up to 6m0s for node "functional-373432" to be "Ready" ...
	I1124 09:47:06.208714 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208746 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208771 1844089 retry.go:31] will retry after 239.578894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.208823 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208836 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208841 1844089 retry.go:31] will retry after 363.194189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208887 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.209209 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.448577 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:06.513317 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.513406 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.513430 1844089 retry.go:31] will retry after 455.413395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.572567 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.636310 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.636351 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.636371 1844089 retry.go:31] will retry after 493.81878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.709791 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.969606 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.043721 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.043767 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.043786 1844089 retry.go:31] will retry after 737.997673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.130919 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.189702 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.189740 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.189777 1844089 retry.go:31] will retry after 362.835066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.209918 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.209989 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.210325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.552843 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.609433 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.612888 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.612921 1844089 retry.go:31] will retry after 813.541227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.709061 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.709150 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.709464 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.782677 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.840776 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.844096 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.844127 1844089 retry.go:31] will retry after 1.225797654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.209825 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.209923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.210302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:08.210357 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:08.426707 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:08.489610 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:08.489648 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.489666 1844089 retry.go:31] will retry after 1.230621023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.709036 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.709146 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.709492 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.070184 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:09.132816 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.132856 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.132877 1844089 retry.go:31] will retry after 1.628151176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.209565 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.709579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.709673 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.721235 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:09.779532 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.779572 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.779591 1844089 retry.go:31] will retry after 1.535326746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.208957 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:10.709858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.709945 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.710278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:10.710329 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:10.761451 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:10.821517 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:10.825161 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.825191 1844089 retry.go:31] will retry after 2.22755575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.209753 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.209827 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.210169 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:11.315630 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:11.371370 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:11.375223 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.375258 1844089 retry.go:31] will retry after 3.052255935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.709710 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.709783 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.710113 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.208839 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.208935 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.209276 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.709439 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:13.052884 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:13.107513 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:13.110665 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.110696 1844089 retry.go:31] will retry after 2.047132712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.208986 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:13.209499 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:13.708863 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.708946 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.709225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.428018 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:14.497830 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:14.500554 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.500586 1844089 retry.go:31] will retry after 5.866686171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.708931 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.158123 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.208926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.209197 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.236504 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:15.240097 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.240134 1844089 retry.go:31] will retry after 4.86514919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.710246 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:15.710298 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:16.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.209082 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:16.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.709395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.209050 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.708987 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:18.208849 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.208918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.209189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:18.209229 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:18.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.708962 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.709278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.709232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:20.105978 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:20.163220 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.166411 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.166455 1844089 retry.go:31] will retry after 7.973407294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.209623 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.209700 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.210040 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:20.210093 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:20.367494 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:20.426176 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.426221 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.426244 1844089 retry.go:31] will retry after 7.002953248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.709786 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.710109 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.208846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.208922 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.709365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.209249 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.209597 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.709231 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.709348 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.709682 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:22.709735 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:23.209559 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.209633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.209953 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:23.709725 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.710141 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.209255 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.708973 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.709052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:25.209389 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.209467 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.209841 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:25.209903 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:25.709642 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.709719 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.209709 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.210119 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.709913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.709992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.710307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.208828 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.208902 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.209226 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.429779 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:27.489021 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:27.489061 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.489078 1844089 retry.go:31] will retry after 11.455669174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.709620 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.709697 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.710061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:27.710112 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:28.140690 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:28.207909 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:28.207963 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.207981 1844089 retry.go:31] will retry after 7.295318191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.209358 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:28.709045 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.709130 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.709479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.209267 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.209347 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.209673 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.709959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.710312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:29.710375 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:30.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.209713 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.210010 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:30.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.208899 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.709282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:32.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.209035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.209376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:32.209432 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:32.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.709024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.208858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.208927 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.209204 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.709003 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:34.208983 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.209403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:34.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:34.709379 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.709553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.709927 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.209738 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.209811 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.210108 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.503497 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:35.564590 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:35.564633 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.564653 1844089 retry.go:31] will retry after 18.757863028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.709881 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.709958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.710297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.208842 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.208909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.209196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.708965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.709288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:36.709337 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:37.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.209034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:37.708926 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.708999 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:38.709418 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:38.945958 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:39.002116 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:39.006563 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.006598 1844089 retry.go:31] will retry after 17.731618054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.209830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.210101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:39.708971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.709049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.209137 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.709279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.709607 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:40.709669 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:41.209237 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:41.709465 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.709538 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.709862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.209660 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.209740 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.210065 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.709826 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.710247 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:42.710300 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:43.208851 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.208929 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.209238 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:43.708832 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.708904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.709198 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.209292 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.709200 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.709637 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:45.209579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.209674 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.210095 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:45.210174 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:45.708846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.708926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.709257 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.709348 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.208969 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:47.709460 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:48.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:48.708913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.708985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.709311 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.209041 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.709341 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.709413 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:49.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:50.209504 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.209579 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.209916 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:50.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.709795 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.209819 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.210144 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.708840 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.708913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.709251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:52.208995 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.209079 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.209450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:52.209504 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:52.709193 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.709263 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.709579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.209019 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.709514 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.208914 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.208983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.323627 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:54.379391 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:54.382809 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.382842 1844089 retry.go:31] will retry after 21.097681162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.709482 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.709561 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.709905 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:54.709960 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:55.209834 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.209915 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.210225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:55.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.708984 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.709297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.209078 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.709184 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.709266 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.709603 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.738841 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:56.794457 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:56.797830 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:56.797870 1844089 retry.go:31] will retry after 32.033139138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:57.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.209553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.209864 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:57.209918 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:57.709718 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.709790 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.710100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.209898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.209970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.210337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.709037 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.709135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.209241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.209573 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.709578 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.709657 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.710027 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:59.710084 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:00.211215 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.211305 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.211621 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:00.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.208998 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.209081 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.708891 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.708967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:02.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.209136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.209526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:02.209599 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:02.709293 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.709375 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.709754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.209529 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.209595 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.209866 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.709708 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.709780 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.710093 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:04.209893 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.209965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.210332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:04.210385 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:04.709021 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.709445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.209464 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.209551 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.209872 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.709670 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.709745 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.710155 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.209763 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.209847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.708847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.708923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.709285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:06.709340 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:07.208931 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:07.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.709326 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.709122 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.709201 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.709539 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:08.709592 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:09.209218 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.209284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.209536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:09.709509 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.709587 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.709963 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.209602 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.209679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.209999 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.709702 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.709772 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.710032 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:10.710072 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:11.209870 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.209951 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.210285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:11.708984 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.208994 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:13.209062 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.209163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:13.209567 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:13.709210 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.709665 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.209027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.708929 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:15.209506 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.209583 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.209851 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:15.209900 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:15.481440 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:15.543475 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:15.543517 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.543536 1844089 retry.go:31] will retry after 17.984212056s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.709841 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.709917 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.710203 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.208972 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.209053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.209359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.708920 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.709254 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.209025 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.209445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.709181 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.709254 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.709571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:17.709636 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:18.209204 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.209276 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:18.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.209167 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.209240 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.709543 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.709616 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.709867 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:19.709908 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:20.209743 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.209813 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.210142 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:20.708844 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.708918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.709248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.709064 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:22.209022 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.209096 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.209401 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:22.209447 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:22.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.209381 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.709077 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.709165 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.709527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:24.209256 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.209332 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:24.209710 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:24.709523 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.709594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.709919 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.209714 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.209794 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.210176 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.709866 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.709934 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.710232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.709174 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.709252 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.709562 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:26.709621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:27.209207 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.209330 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.209681 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:27.709493 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.709901 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.209534 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.209607 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.209945 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.709691 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:28.710042 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:28.831261 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:48:28.892751 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892791 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892882 1844089 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:29.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:29.709415 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.709488 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.709832 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.209666 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.209735 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.209996 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.709837 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.709912 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.710250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:30.710310 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:31.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.209451 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:31.708995 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.709068 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.209127 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.209200 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.209540 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.709251 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.709359 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.709688 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:33.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:33.209573 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:33.528038 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:33.587216 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587268 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587355 1844089 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:33.590586 1844089 out.go:179] * Enabled addons: 
	I1124 09:48:33.594109 1844089 addons.go:530] duration metric: took 1m28.385890989s for enable addons: enabled=[]
	I1124 09:48:33.709504 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.709580 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.709909 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.209684 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.209763 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.210103 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:35.209792 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.209867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.210196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:35.210254 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:35.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.209290 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.708942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.209089 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.209182 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:37.709398 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:38.208956 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.209049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.209393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:38.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.209063 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.209144 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.209398 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.709762 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:39.709826 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:40.209362 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.209445 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.209801 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:40.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.709695 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.710016 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.209808 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.209911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.210242 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.708947 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.709450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:42.209333 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.209441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.209737 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:42.209782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:42.709513 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.709593 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.709913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.209705 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.209787 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.210136 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.709811 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.709882 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.710135 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.208840 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.208916 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.709434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:44.709491 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:45.209557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.210004 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:45.709853 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.709947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.710263 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.708971 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:47.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:47.209423 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:47.708928 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.709090 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.709181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.709512 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:49.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:49.209487 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:49.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.209043 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.208831 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.209321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.708959 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:51.709417 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:52.209136 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.209591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:52.709205 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.709536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.209062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.709175 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.709255 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.709599 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:53.709661 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:54.209206 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.209288 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:54.709557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.709679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.709998 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.209740 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.210158 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.708864 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:56.208988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.209080 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:56.209502 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:56.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.709658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.209431 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.209503 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.209825 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.709393 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.709781 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:58.209591 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.209670 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.210036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:58.210095 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:58.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.709861 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.208919 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.709435 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.709520 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.709836 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:00.209722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.210110 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:00.210156 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:00.709882 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.709966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.710301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.208997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.709044 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.709069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:02.709406 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:03.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.209309 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:03.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.709027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.709334 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.208982 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.209059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.709678 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:04.709782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:05.209548 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.209645 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.209977 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:05.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.710166 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.208981 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.209051 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.209332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.709332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:07.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.209086 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.209494 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:07.209563 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:07.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.709391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.209399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.709011 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.709085 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.209052 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.209488 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.709362 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.709442 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.709796 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:09.709855 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:10.209613 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.209690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:10.709735 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.709803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.209881 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.209958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.210304 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.709359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:12.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:12.209396 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:12.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.709325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.209056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.209385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.709008 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.709380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:14.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.209238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.209577 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:14.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:14.709397 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.709478 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.709814 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.209760 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.210102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.709949 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.710282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.209016 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.709074 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.709163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.709419 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:16.709459 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:17.209141 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.209215 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:17.709286 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.709666 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.209424 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.209499 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.209754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.709505 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.709585 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.709897 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:18.709953 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:19.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.209779 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.210117 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:19.709834 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:21.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:21.209415 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:21.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.709029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.209126 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.209204 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.209575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.709550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:23.209231 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.209670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:23.209763 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:23.709555 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.709633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.709995 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.209767 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.209841 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.709526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:25.209328 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.209411 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.209756 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:25.209816 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:25.709508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.709600 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.709938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.209774 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.209856 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.210202 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.709369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:27.209746 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.210131 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:27.210184 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:27.708830 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.708905 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.208880 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.209307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.709007 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.709327 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.209020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.709345 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.709441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.709777 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:29.709838 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:30.209612 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.209687 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.209958 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:30.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.709798 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.710129 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.208884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.209299 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.708974 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:32.208916 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.208993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:32.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:32.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.208919 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.208994 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.209330 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.709056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.709413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:34.209151 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.209227 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:34.209646 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:34.709436 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.709506 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.709774 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.209725 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.209803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.210160 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.708884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.708977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.208912 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.209323 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.709458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:36.709524 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:37.209047 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.209151 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:37.709220 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.709324 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.709631 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.209508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.209592 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.209964 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.709785 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.709869 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.710199 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:38.710257 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:39.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.208884 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.209168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:39.709057 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.709156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.709501 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.209097 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.209195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.709222 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.709295 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.709630 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:41.209317 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.209397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.209747 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:41.209802 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:41.709569 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.709654 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.709993 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.209817 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.209904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.210200 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.209070 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.709575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:43.709620 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:44.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:44.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.709401 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.709783 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.209860 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.209959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.210271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.708945 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:46.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.209515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:46.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:46.709202 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.709268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.209384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.709402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.209161 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.209414 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.709091 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.709194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.709569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:48.709627 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:49.209307 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.209384 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.209719 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:49.709527 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.709599 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.709865 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.209620 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.209699 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.210039 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.709717 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.709799 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:50.710183 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:51.208825 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.208894 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.209172 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:51.708925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.709349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.708893 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:53.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.209349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:53.209399 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:53.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.208920 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.209318 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.709373 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.709458 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.709760 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:55.209592 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.209978 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:55.210040 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:55.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.710161 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.208943 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.209271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.708876 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.708959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.208866 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.209285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.708997 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.709427 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:57.709482 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:58.209166 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.209246 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.209658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:58.709454 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.709524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.709780 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.209521 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.209598 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.209934 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.709770 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.709854 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.710168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:59.710230 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:00.208926 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.210913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:50:00.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.709842 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.710201 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.209315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.709093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:02.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.209057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:02.209542 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:02.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.709389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.209380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.708939 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.709357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.208970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.209268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.709182 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.709269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.709623 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:04.709678 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:05.209442 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.209524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.209862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:05.709612 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.709690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.710022 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.209806 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.209880 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.210219 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:07.209084 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.209187 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.209448 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:07.209497 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:07.709139 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.709341 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.209829 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.209903 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.708897 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.708964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.709236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.208927 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.209002 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.209378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.708935 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:09.709424 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:10.208903 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.209331 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:10.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.709423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.209530 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.709132 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.709202 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:11.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:12.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:12.709068 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.709177 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.709636 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.209220 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.209299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.209571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:14.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.209025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:14.209433 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:14.708909 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.708988 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.709306 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.209826 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.210152 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.708902 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.208905 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.208978 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.209278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.708874 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.709267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:16.709311 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:17.208877 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.209356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:17.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.209413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.709157 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.709238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:18.709645 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:19.209201 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.209269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.209518 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:19.709485 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.709558 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.709880 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.209636 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.209974 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.709755 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.709829 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.710090 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:20.710130 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:21.209835 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.209913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:21.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.709338 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.208900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.208981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.709058 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:23.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.209616 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:23.209677 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:23.709211 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.708953 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.209203 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.209580 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.709392 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.709705 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:25.709765 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:26.209510 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.209594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.209928 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:26.709733 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.209837 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.209926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.210235 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.709350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:28.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.209251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:28.209296 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:28.709016 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.709092 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.709432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.709708 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:30.209514 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.209603 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.209930 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:30.209989 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:30.709705 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.709782 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.710096 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.209823 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.210153 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.709337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.209065 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.209484 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:32.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:33.209221 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.209638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:33.709229 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.709309 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.709638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.209279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.209527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.709451 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.709526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.709824 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:34.709870 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:35.209712 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.210156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:35.709774 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.710101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.208924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.209266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.708961 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.709411 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:37.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.208992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.209261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:37.209303 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:37.708946 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.209345 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.709003 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.709091 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.709404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:39.209187 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.209613 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:39.209672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:39.709433 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.709508 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.709838 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.209598 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.209675 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.709773 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.709855 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.710189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.208908 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:41.709318 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:42.209001 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.209093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:42.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.709286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.709587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.209235 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.209303 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.709313 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.709652 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:43.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:44.209469 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.209542 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.209879 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:44.709684 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.709755 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.710023 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.208845 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.208942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.709723 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.709804 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.710156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:45.710211 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:46.208872 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.208948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.209249 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:46.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.208964 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.709147 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.709424 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:48.209095 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.209192 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:48.209580 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:48.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.709378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.209077 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.709414 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.709491 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.709816 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:50.209655 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.209742 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.210066 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:50.210123 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:50.709861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.709937 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.710188 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.208878 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.208952 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.209322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.708914 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.708993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.208985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.709023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.709362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:52.709420 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:53.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.209038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.209404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:53.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.708997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.709294 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.709054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.709449 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:54.709516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:55.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.209634 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.209938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:55.709754 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.709830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.710148 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.208939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:57.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.209386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:57.209445 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:57.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.709399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.209082 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.209168 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.209479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.709393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.208975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.209052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.708894 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.709244 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:59.709289 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:00.209000 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.209097 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:00.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.209068 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.209486 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:01.709394 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:02.208943 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.209065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:02.708872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.708947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.709229 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:03.208953 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.210127 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:51:03.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.708939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.709302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:04.209006 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.209406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:04.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:04.709392 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.709474 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.709835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.209403 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.209479 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.209835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.709680 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.709766 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.710028 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:06.209869 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.209955 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.210295 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:06.210355 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:06.708964 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.709046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.709408 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.708938 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.209150 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.209225 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.709219 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.709289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.709627 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:08.709719 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:09.209522 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.209981 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:09.709768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.709843 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.208987 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:11.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.209070 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:11.209465 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:11.708886 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.708972 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.709239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.208935 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.708976 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.208896 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:13.709391 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:14.208984 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:14.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.709615 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.209618 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.209698 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.210033 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.709832 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.709911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.710236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:15.710293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:16.208918 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:16.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.709009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.709328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.209046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.209357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.708924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.709185 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:18.208887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.208963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.209319 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:18.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:18.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.709038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.208917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.709241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.709591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:20.209330 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.209415 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:20.209872 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:20.709638 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.709728 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.209872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.209964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.210347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.709162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.709523 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.209023 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.209095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.209382 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.709055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:22.709477 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:23.209195 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.209283 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:23.709228 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.709557 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.709350 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.709431 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.709744 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:24.709799 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:25.209704 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.210041 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:25.709815 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.709891 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.209912 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.209990 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.210312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.708887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.708968 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.709274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:27.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:27.209444 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:27.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.209045 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.209134 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:29.209164 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:29.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:29.709432 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.709510 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.709795 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.209546 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.209973 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.709624 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.709702 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.710036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:31.209789 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.209866 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.210145 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:31.210192 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:31.708857 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.709271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.208962 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.209409 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.708881 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.708953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.709262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.709012 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:33.709407 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:34.209051 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.209156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:34.709486 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.709969 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.209768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.209850 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.210220 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.709035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:36.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.209372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:36.209421 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:36.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.709519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.208947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.209239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.709416 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:38.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.209257 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:38.209647 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:38.709163 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.709230 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.208929 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.209009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.209347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.709324 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.709397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.709728 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:40.209489 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.209557 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.209830 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:40.209876 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:40.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.709707 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.710055 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.209685 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.209762 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.210061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.709752 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.709828 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.710112 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.709372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:42.709426 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:43.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:43.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.709026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.209194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.209587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.709472 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.709546 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.709820 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:44.709861 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:45.209849 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.209939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.210268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:45.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.709006 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.709307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.209264 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.709403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:47.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.209219 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.209569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:47.209622 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:47.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.709312 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.709563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.208991 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.709134 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.709207 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.709500 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.209300 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:49.709409 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:50.209121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.209206 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:50.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.709261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.209021 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.209129 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.209441 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.709189 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.709265 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.709596 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:51.709649 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:52.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.209290 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.209551 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:52.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.209080 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.209550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.708975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.709317 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.209337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:54.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:54.708980 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.209380 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.209452 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.209779 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.709062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.709456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:56.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.209063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:56.209437 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:56.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.709867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.209908 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.209985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.210307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:58.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.209185 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:58.209485 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:58.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.209363 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.708916 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.709322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.209018 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.709370 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.709700 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:00.709758 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:01.209455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.209526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.209787 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:01.709649 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.709729 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.209841 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.209925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.210265 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.709293 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:03.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.209407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:03.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:03.709146 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.709228 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.709570 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.209286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.209544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.709581 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.709667 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.710009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.208954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.708991 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.709066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:05.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:06.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:06.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.709218 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.709559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.209217 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.209291 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.209612 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.709400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:08.209119 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.209197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:08.209621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:08.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.709292 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:10.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:11.209191 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.209268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.209610 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:11.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.709609 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.208967 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.709039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:13.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.209308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:13.209350 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:13.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.709140 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.709483 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.709549 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.709685 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.710128 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:15.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.209289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.209620 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:15.209679 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:15.709455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.709531 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.709878 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.209651 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.209725 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.209983 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.710195 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:17.709412 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:18.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.208939 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.209302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.709272 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.709356 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:19.709724 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:20.209502 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.209578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.209951 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:20.709782 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.710102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.209876 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.209953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.210310 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.708981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.709321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:22.208895 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.208966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.209252 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:22.209293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:22.709034 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.709136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.709493 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.209350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.708983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.709272 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:24.208934 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.209013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:24.209428 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:24.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.709396 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.209216 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.209546 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.709028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:26.209056 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.209172 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:26.209508 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:26.708880 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.708948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.709291 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.709023 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.709120 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.209060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.209160 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:28.709443 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:29.209162 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.209244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:29.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.709568 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.709818 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.209668 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.209750 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.210098 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.708867 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.708942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:31.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.208986 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.209328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:31.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:31.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.709072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.709157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:33.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:33.209513 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:33.709035 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.709137 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.208964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.209274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.709168 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.709244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:35.209409 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.209492 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.209807 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:35.209852 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:35.709526 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.709597 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.709869 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.209708 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.210043 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.710262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.209297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:37.709440 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:38.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.209069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:38.708972 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.709373 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.709697 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:39.709756 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:40.209475 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.209550 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.209908 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:40.709677 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.709752 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.710115 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.209759 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.210192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.708889 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.709284 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:42.208977 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:42.209516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:42.709031 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.709125 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.709477 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.209073 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.209164 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.209444 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.709305 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.709379 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.709632 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:44.709672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:45.209865 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.210034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.211000 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:45.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.709376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.209157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.209473 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.709360 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:47.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.209066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.209434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:47.209489 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:47.709149 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.709220 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.709470 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.209367 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.208882 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.208956 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.209248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.708923 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:49.709401 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:50.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.209015 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.209369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:50.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.709142 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.709429 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.209160 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.209581 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.709273 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.709351 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.709670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:51.709725 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:52.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.209549 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.209889 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:52.709740 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.709823 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.710180 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.209005 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.709060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:54.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:54.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:54.708969 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.709044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.709387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.209314 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.209382 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.209635 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.709021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:56.209074 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:56.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:56.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.709266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.709098 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.709195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.709513 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.208958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.209260 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.708954 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.709420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:58.709478 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:59.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:59.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.709259 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.709688 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.710034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:00.710088 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:01.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.209777 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.210034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:01.709784 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.709858 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.710223 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.209882 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.209960 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.210301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.708970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:03.209442 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:03.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.209135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.209531 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.709573 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.709992 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:05.209202 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.209287 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.209886 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:05.209937 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:05.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.709000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:06.209058 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:06.209140 1844089 node_ready.go:38] duration metric: took 6m0.000414768s for node "functional-373432" to be "Ready" ...
	I1124 09:53:06.212349 1844089 out.go:203] 
	W1124 09:53:06.215554 1844089 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1124 09:53:06.215587 1844089 out.go:285] * 
	W1124 09:53:06.217723 1844089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:53:06.220637 1844089 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971906805Z" level=info msg="Using the internal default seccomp profile"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971915314Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971921788Z" level=info msg="No blockio config file specified, blockio not configured"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971927203Z" level=info msg="RDT not available in the host system"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971939298Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.972776158Z" level=info msg="Conmon does support the --sync option"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.972804301Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.972821204Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.973519051Z" level=info msg="Conmon does support the --sync option"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.973546834Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.973688333Z" level=info msg="Updated default CNI network name to "
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.974254864Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\
"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_liste
n = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.974668711Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.974738349Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030170117Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030217921Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030269803Z" level=info msg="Create NRI interface"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.03037405Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030382239Z" level=info msg="runtime interface created"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030396155Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030403704Z" level=info msg="runtime interface starting up..."
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.03041931Z" level=info msg="starting plugins..."
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030432398Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030505859Z" level=info msg="No systemd watchdog enabled"
	Nov 24 09:47:03 functional-373432 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:53:08.104030    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:08.104757    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:08.106244    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:08.106591    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:08.108015    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 09:53:08 up  8:35,  0 user,  load average: 0.20, 0.21, 0.55
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 09:53:05 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:06 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Nov 24 09:53:06 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:06 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:06 functional-373432 kubelet[9310]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:06 functional-373432 kubelet[9310]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:06 functional-373432 kubelet[9310]: E1124 09:53:06.524933    9310 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:06 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:06 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:07 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Nov 24 09:53:07 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:07 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:07 functional-373432 kubelet[9331]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:07 functional-373432 kubelet[9331]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:07 functional-373432 kubelet[9331]: E1124 09:53:07.269678    9331 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:07 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:07 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:07 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Nov 24 09:53:07 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:07 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:08 functional-373432 kubelet[9401]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:08 functional-373432 kubelet[9401]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:08 functional-373432 kubelet[9401]: E1124 09:53:08.023401    9401 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:08 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:08 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (368.664438ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-373432 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-373432 get po -A: exit status 1 (61.59363ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-373432 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-373432 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-373432 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (329.763258ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 logs -n 25: (1.024691151s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-498341 image save kicbase/echo-server:functional-498341 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image rm kicbase/echo-server:functional-498341 --alsologtostderr                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image save --daemon kicbase/echo-server:functional-498341 --alsologtostderr                                                             │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/1806704.pem                                                                                                 │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /usr/share/ca-certificates/1806704.pem                                                                                     │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/18067042.pem                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /usr/share/ca-certificates/18067042.pem                                                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh sudo cat /etc/test/nested/copy/1806704/hosts                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format short --alsologtostderr                                                                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format yaml --alsologtostderr                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ ssh            │ functional-498341 ssh pgrep buildkitd                                                                                                                     │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │                     │
	│ image          │ functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr                                                    │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format json --alsologtostderr                                                                                                │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format table --alsologtostderr                                                                                               │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                                   │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ delete         │ -p functional-498341                                                                                                                                      │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │ 24 Nov 25 09:38 UTC │
	│ start          │ -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │                     │
	│ start          │ -p functional-373432 --alsologtostderr -v=8                                                                                                               │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:46:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:46:59.387016 1844089 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:46:59.387211 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387243 1844089 out.go:374] Setting ErrFile to fd 2...
	I1124 09:46:59.387263 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387557 1844089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:46:59.388008 1844089 out.go:368] Setting JSON to false
	I1124 09:46:59.388882 1844089 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30570,"bootTime":1763947050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:46:59.388979 1844089 start.go:143] virtualization:  
	I1124 09:46:59.392592 1844089 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:46:59.396303 1844089 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:46:59.396370 1844089 notify.go:221] Checking for updates...
	I1124 09:46:59.402093 1844089 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:46:59.405033 1844089 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:46:59.407908 1844089 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:46:59.411405 1844089 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:46:59.414441 1844089 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:46:59.417923 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:46:59.418109 1844089 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:46:59.451337 1844089 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:46:59.451452 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.507906 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.498692309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.508018 1844089 docker.go:319] overlay module found
	I1124 09:46:59.511186 1844089 out.go:179] * Using the docker driver based on existing profile
	I1124 09:46:59.514098 1844089 start.go:309] selected driver: docker
	I1124 09:46:59.514123 1844089 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.514235 1844089 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:46:59.514350 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.569823 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.559648119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.570237 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:46:59.570306 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:46:59.570363 1844089 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.573590 1844089 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:46:59.576497 1844089 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:46:59.579448 1844089 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:46:59.582547 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:46:59.582648 1844089 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:46:59.602755 1844089 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:46:59.602781 1844089 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:46:59.648405 1844089 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:46:59.826473 1844089 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:46:59.826636 1844089 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:46:59.826856 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:46:59.826893 1844089 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:46:59.826927 1844089 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:46:59.826975 1844089 start.go:364] duration metric: took 25.756µs to acquireMachinesLock for "functional-373432"
	I1124 09:46:59.826990 1844089 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:46:59.826996 1844089 fix.go:54] fixHost starting: 
	I1124 09:46:59.827258 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:46:59.843979 1844089 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:46:59.844011 1844089 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:46:59.847254 1844089 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:46:59.847299 1844089 machine.go:94] provisionDockerMachine start ...
	I1124 09:46:59.847379 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:46:59.872683 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:46:59.873034 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:46:59.873051 1844089 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:46:59.992797 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.044426 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.044454 1844089 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:47:00.044547 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.104810 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.105156 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.105170 1844089 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:47:00.386378 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.386611 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.409023 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.411110 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.411442 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.411467 1844089 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:00.595280 1844089 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595319 1844089 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595392 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:47:00.595381 1844089 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595403 1844089 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 139.325µs
	I1124 09:47:00.595412 1844089 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595423 1844089 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595434 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:47:00.595442 1844089 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 62.902µs
	I1124 09:47:00.595450 1844089 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595457 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:47:00.595463 1844089 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 41.207µs
	I1124 09:47:00.595469 1844089 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:47:00.595461 1844089 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595477 1844089 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595494 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:47:00.595500 1844089 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.394µs
	I1124 09:47:00.595507 1844089 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595510 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:47:00.595517 1844089 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595524 1844089 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.5µs
	I1124 09:47:00.595532 1844089 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595282 1844089 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595546 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:47:00.595552 1844089 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 36.923µs
	I1124 09:47:00.595556 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:47:00.595558 1844089 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:47:00.595562 1844089 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 302.437µs
	I1124 09:47:00.595572 1844089 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:47:00.595568 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:47:00.595581 1844089 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 263.856µs
	I1124 09:47:00.595587 1844089 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:47:00.595593 1844089 cache.go:87] Successfully saved all images to host disk.
	I1124 09:47:00.596331 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:00.596354 1844089 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:47:00.596379 1844089 ubuntu.go:190] setting up certificates
	I1124 09:47:00.596403 1844089 provision.go:84] configureAuth start
	I1124 09:47:00.596480 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:00.614763 1844089 provision.go:143] copyHostCerts
	I1124 09:47:00.614805 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614845 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:47:00.614865 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614942 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:47:00.615049 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615076 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:47:00.615081 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615111 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:47:00.615166 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615187 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:47:00.615191 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615218 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:47:00.615273 1844089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:47:00.746073 1844089 provision.go:177] copyRemoteCerts
	I1124 09:47:00.746146 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:00.746187 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.767050 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:00.873044 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1124 09:47:00.873153 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:00.891124 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1124 09:47:00.891207 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:47:00.909032 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1124 09:47:00.909209 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:47:00.927426 1844089 provision.go:87] duration metric: took 330.992349ms to configureAuth
	I1124 09:47:00.927482 1844089 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:47:00.927686 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:00.927808 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.945584 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.945906 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.945929 1844089 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:01.279482 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:01.279511 1844089 machine.go:97] duration metric: took 1.432203745s to provisionDockerMachine
	I1124 09:47:01.279522 1844089 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:47:01.279534 1844089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:01.279608 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:01.279659 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.306223 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.413310 1844089 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:01.416834 1844089 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1124 09:47:01.416855 1844089 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1124 09:47:01.416859 1844089 command_runner.go:130] > VERSION_ID="12"
	I1124 09:47:01.416863 1844089 command_runner.go:130] > VERSION="12 (bookworm)"
	I1124 09:47:01.416868 1844089 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1124 09:47:01.416884 1844089 command_runner.go:130] > ID=debian
	I1124 09:47:01.416889 1844089 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1124 09:47:01.416894 1844089 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1124 09:47:01.416900 1844089 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1124 09:47:01.416956 1844089 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:47:01.416971 1844089 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:47:01.416982 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:47:01.417038 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:47:01.417141 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:47:01.417149 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /etc/ssl/certs/18067042.pem
	I1124 09:47:01.417225 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:47:01.417238 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> /etc/test/nested/copy/1806704/hosts
	I1124 09:47:01.417285 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:47:01.425057 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:01.443829 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:47:01.461688 1844089 start.go:296] duration metric: took 182.151565ms for postStartSetup
	I1124 09:47:01.461806 1844089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:47:01.461866 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.478949 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.582285 1844089 command_runner.go:130] > 19%
	I1124 09:47:01.582359 1844089 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:47:01.587262 1844089 command_runner.go:130] > 159G
	I1124 09:47:01.587296 1844089 fix.go:56] duration metric: took 1.760298367s for fixHost
	I1124 09:47:01.587308 1844089 start.go:83] releasing machines lock for "functional-373432", held for 1.76032423s
	I1124 09:47:01.587385 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:01.605227 1844089 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:01.605290 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.605558 1844089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:01.605651 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.623897 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.640948 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.724713 1844089 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1763789673-21948", "minikube_version": "v1.37.0", "commit": "2996c7ec74d570fa8ab37e6f4f8813150d0c7473"}
	I1124 09:47:01.724863 1844089 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:01.812522 1844089 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1124 09:47:01.816014 1844089 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1124 09:47:01.816053 1844089 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1124 09:47:01.816128 1844089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:01.851397 1844089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1124 09:47:01.855673 1844089 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1124 09:47:01.855841 1844089 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:01.855908 1844089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:01.863705 1844089 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:47:01.863730 1844089 start.go:496] detecting cgroup driver to use...
	I1124 09:47:01.863762 1844089 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:47:01.863809 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:01.879426 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:01.892902 1844089 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:01.892974 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:01.908995 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:01.922294 1844089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:02.052541 1844089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:02.189051 1844089 docker.go:234] disabling docker service ...
	I1124 09:47:02.189218 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:02.205065 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:02.219126 1844089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:02.329712 1844089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:02.449311 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:02.462019 1844089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:02.474641 1844089 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1124 09:47:02.476035 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:02.633334 1844089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:02.633408 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.642946 1844089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:02.643028 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.652272 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.661578 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.670499 1844089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:02.678769 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.688087 1844089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.696980 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.705967 1844089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:02.713426 1844089 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1124 09:47:02.713510 1844089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:47:02.720989 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:02.841969 1844089 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:03.036830 1844089 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:03.036905 1844089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:03.040587 1844089 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1124 09:47:03.040611 1844089 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1124 09:47:03.040618 1844089 command_runner.go:130] > Device: 0,72	Inode: 1805        Links: 1
	I1124 09:47:03.040633 1844089 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:03.040639 1844089 command_runner.go:130] > Access: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040645 1844089 command_runner.go:130] > Modify: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040654 1844089 command_runner.go:130] > Change: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040658 1844089 command_runner.go:130] >  Birth: -
	I1124 09:47:03.041299 1844089 start.go:564] Will wait 60s for crictl version
	I1124 09:47:03.041375 1844089 ssh_runner.go:195] Run: which crictl
	I1124 09:47:03.044736 1844089 command_runner.go:130] > /usr/local/bin/crictl
	I1124 09:47:03.045405 1844089 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:47:03.072144 1844089 command_runner.go:130] > Version:  0.1.0
	I1124 09:47:03.072339 1844089 command_runner.go:130] > RuntimeName:  cri-o
	I1124 09:47:03.072489 1844089 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1124 09:47:03.072634 1844089 command_runner.go:130] > RuntimeApiVersion:  v1
	I1124 09:47:03.075078 1844089 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:47:03.075181 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.102664 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.102689 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.102697 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.102702 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.102708 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.102713 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.102717 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.102722 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.102726 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.102730 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.102734 1844089 command_runner.go:130] >      static
	I1124 09:47:03.102737 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.102741 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.102745 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.102753 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.102757 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.102763 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.102768 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.102772 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.102781 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.104732 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.133953 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.133980 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.133987 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.133991 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.133996 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.134000 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.134004 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.134008 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.134012 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.134016 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.134019 1844089 command_runner.go:130] >      static
	I1124 09:47:03.134023 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.134027 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.134031 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.134039 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.134043 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.134050 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.134056 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.134060 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.134068 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.140942 1844089 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:47:03.143873 1844089 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:47:03.160952 1844089 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:03.165052 1844089 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1124 09:47:03.165287 1844089 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:03.165490 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.325050 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.479106 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.632699 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:03.632773 1844089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:03.664623 1844089 command_runner.go:130] > {
	I1124 09:47:03.664647 1844089 command_runner.go:130] >   "images":  [
	I1124 09:47:03.664652 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664661 1844089 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1124 09:47:03.664666 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664683 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1124 09:47:03.664695 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664705 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664715 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1124 09:47:03.664722 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664727 1844089 command_runner.go:130] >       "size":  "29035622",
	I1124 09:47:03.664734 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664738 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664746 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664750 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664760 1844089 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1124 09:47:03.664768 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664775 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1124 09:47:03.664780 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664788 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664797 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1124 09:47:03.664804 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664808 1844089 command_runner.go:130] >       "size":  "74488375",
	I1124 09:47:03.664816 1844089 command_runner.go:130] >       "username":  "nonroot",
	I1124 09:47:03.664820 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664827 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664831 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664838 1844089 command_runner.go:130] >       "id":  "1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca",
	I1124 09:47:03.664845 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664851 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.24-0"
	I1124 09:47:03.664855 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664859 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664873 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:62cae8d38d7e1187ef2841ebc55bef1c5a46f21a69675fae8351f92d3a3e9bc6"
	I1124 09:47:03.664880 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664885 1844089 command_runner.go:130] >       "size":  "63341525",
	I1124 09:47:03.664892 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.664896 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.664904 1844089 command_runner.go:130] >       },
	I1124 09:47:03.664908 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664923 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664929 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664932 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664939 1844089 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1124 09:47:03.664947 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664951 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1124 09:47:03.664959 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664963 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664974 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1124 09:47:03.664987 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1124 09:47:03.664994 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664999 1844089 command_runner.go:130] >       "size":  "60857170",
	I1124 09:47:03.665002 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665009 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665013 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665016 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665020 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665024 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665028 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665039 1844089 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1124 09:47:03.665043 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665053 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1124 09:47:03.665057 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665065 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665078 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1124 09:47:03.665085 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665089 1844089 command_runner.go:130] >       "size":  "84947242",
	I1124 09:47:03.665093 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665131 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665140 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665144 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665148 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665155 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665163 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665174 1844089 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1124 09:47:03.665181 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665187 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1124 09:47:03.665195 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665198 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665206 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1124 09:47:03.665213 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665217 1844089 command_runner.go:130] >       "size":  "72167568",
	I1124 09:47:03.665221 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665229 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665232 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665236 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665244 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665247 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665254 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665262 1844089 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1124 09:47:03.665269 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665275 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1124 09:47:03.665278 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665285 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665292 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1124 09:47:03.665299 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665304 1844089 command_runner.go:130] >       "size":  "74105124",
	I1124 09:47:03.665308 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665315 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665319 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665326 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665333 1844089 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1124 09:47:03.665340 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665346 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1124 09:47:03.665353 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665357 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665369 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1124 09:47:03.665376 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665380 1844089 command_runner.go:130] >       "size":  "49819792",
	I1124 09:47:03.665384 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665388 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665396 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665401 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665405 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665412 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665415 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665426 1844089 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1124 09:47:03.665434 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665439 1844089 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.665442 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665446 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665456 1844089 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1124 09:47:03.665460 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665469 1844089 command_runner.go:130] >       "size":  "517328",
	I1124 09:47:03.665473 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665478 1844089 command_runner.go:130] >         "value":  "65535"
	I1124 09:47:03.665485 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665489 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665499 1844089 command_runner.go:130] >       "pinned":  true
	I1124 09:47:03.665506 1844089 command_runner.go:130] >     }
	I1124 09:47:03.665510 1844089 command_runner.go:130] >   ]
	I1124 09:47:03.665517 1844089 command_runner.go:130] > }
	I1124 09:47:03.667798 1844089 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:47:03.667821 1844089 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:47:03.667827 1844089 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:47:03.667924 1844089 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:47:03.668011 1844089 ssh_runner.go:195] Run: crio config
	I1124 09:47:03.726362 1844089 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1124 09:47:03.726390 1844089 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1124 09:47:03.726403 1844089 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1124 09:47:03.726416 1844089 command_runner.go:130] > #
	I1124 09:47:03.726461 1844089 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1124 09:47:03.726469 1844089 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1124 09:47:03.726481 1844089 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1124 09:47:03.726488 1844089 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1124 09:47:03.726498 1844089 command_runner.go:130] > # reload'.
	I1124 09:47:03.726518 1844089 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1124 09:47:03.726529 1844089 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1124 09:47:03.726536 1844089 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1124 09:47:03.726563 1844089 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1124 09:47:03.726573 1844089 command_runner.go:130] > [crio]
	I1124 09:47:03.726579 1844089 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1124 09:47:03.726585 1844089 command_runner.go:130] > # containers images, in this directory.
	I1124 09:47:03.727202 1844089 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1124 09:47:03.727221 1844089 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1124 09:47:03.727766 1844089 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1124 09:47:03.727795 1844089 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1124 09:47:03.728310 1844089 command_runner.go:130] > # imagestore = ""
	I1124 09:47:03.728328 1844089 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1124 09:47:03.728337 1844089 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1124 09:47:03.728921 1844089 command_runner.go:130] > # storage_driver = "overlay"
	I1124 09:47:03.728938 1844089 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1124 09:47:03.728946 1844089 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1124 09:47:03.729270 1844089 command_runner.go:130] > # storage_option = [
	I1124 09:47:03.729595 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.729612 1844089 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1124 09:47:03.729620 1844089 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1124 09:47:03.730268 1844089 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1124 09:47:03.730286 1844089 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1124 09:47:03.730295 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1124 09:47:03.730299 1844089 command_runner.go:130] > # always happen on a node reboot
	I1124 09:47:03.730901 1844089 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1124 09:47:03.730939 1844089 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1124 09:47:03.730951 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1124 09:47:03.730957 1844089 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1124 09:47:03.731426 1844089 command_runner.go:130] > # version_file_persist = ""
	I1124 09:47:03.731444 1844089 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1124 09:47:03.731453 1844089 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1124 09:47:03.732044 1844089 command_runner.go:130] > # internal_wipe = true
	I1124 09:47:03.732064 1844089 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1124 09:47:03.732071 1844089 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1124 09:47:03.732663 1844089 command_runner.go:130] > # internal_repair = true
	I1124 09:47:03.732708 1844089 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1124 09:47:03.732717 1844089 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1124 09:47:03.732723 1844089 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1124 09:47:03.733344 1844089 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1124 09:47:03.733360 1844089 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1124 09:47:03.733364 1844089 command_runner.go:130] > [crio.api]
	I1124 09:47:03.733370 1844089 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1124 09:47:03.733954 1844089 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1124 09:47:03.733970 1844089 command_runner.go:130] > # IP address on which the stream server will listen.
	I1124 09:47:03.734597 1844089 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1124 09:47:03.734618 1844089 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1124 09:47:03.734638 1844089 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1124 09:47:03.735322 1844089 command_runner.go:130] > # stream_port = "0"
	I1124 09:47:03.735342 1844089 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1124 09:47:03.735920 1844089 command_runner.go:130] > # stream_enable_tls = false
	I1124 09:47:03.735936 1844089 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1124 09:47:03.736379 1844089 command_runner.go:130] > # stream_idle_timeout = ""
	I1124 09:47:03.736427 1844089 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1124 09:47:03.736442 1844089 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1124 09:47:03.736931 1844089 command_runner.go:130] > # stream_tls_cert = ""
	I1124 09:47:03.736947 1844089 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1124 09:47:03.736954 1844089 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1124 09:47:03.737422 1844089 command_runner.go:130] > # stream_tls_key = ""
	I1124 09:47:03.737439 1844089 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1124 09:47:03.737447 1844089 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1124 09:47:03.737466 1844089 command_runner.go:130] > # automatically pick up the changes.
	I1124 09:47:03.737919 1844089 command_runner.go:130] > # stream_tls_ca = ""
	I1124 09:47:03.737973 1844089 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.738690 1844089 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1124 09:47:03.738709 1844089 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.739334 1844089 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1124 09:47:03.739351 1844089 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1124 09:47:03.739358 1844089 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1124 09:47:03.739383 1844089 command_runner.go:130] > [crio.runtime]
	I1124 09:47:03.739395 1844089 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1124 09:47:03.739402 1844089 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1124 09:47:03.739406 1844089 command_runner.go:130] > # "nofile=1024:2048"
	I1124 09:47:03.739432 1844089 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1124 09:47:03.739736 1844089 command_runner.go:130] > # default_ulimits = [
	I1124 09:47:03.740060 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.740075 1844089 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1124 09:47:03.740677 1844089 command_runner.go:130] > # no_pivot = false
	I1124 09:47:03.740693 1844089 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1124 09:47:03.740700 1844089 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1124 09:47:03.741305 1844089 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1124 09:47:03.741322 1844089 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1124 09:47:03.741328 1844089 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1124 09:47:03.741356 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.741816 1844089 command_runner.go:130] > # conmon = ""
	I1124 09:47:03.741833 1844089 command_runner.go:130] > # Cgroup setting for conmon
	I1124 09:47:03.741841 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1124 09:47:03.742193 1844089 command_runner.go:130] > conmon_cgroup = "pod"
	I1124 09:47:03.742211 1844089 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1124 09:47:03.742237 1844089 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1124 09:47:03.742253 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.742594 1844089 command_runner.go:130] > # conmon_env = [
	I1124 09:47:03.742962 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.742977 1844089 command_runner.go:130] > # Additional environment variables to set for all the
	I1124 09:47:03.742984 1844089 command_runner.go:130] > # containers. These are overridden if set in the
	I1124 09:47:03.742990 1844089 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1124 09:47:03.743288 1844089 command_runner.go:130] > # default_env = [
	I1124 09:47:03.743607 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.743619 1844089 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1124 09:47:03.743646 1844089 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1124 09:47:03.744217 1844089 command_runner.go:130] > # selinux = false
	I1124 09:47:03.744234 1844089 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1124 09:47:03.744279 1844089 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1124 09:47:03.744293 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.744768 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.744784 1844089 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1124 09:47:03.744790 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745254 1844089 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1124 09:47:03.745273 1844089 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1124 09:47:03.745281 1844089 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1124 09:47:03.745308 1844089 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1124 09:47:03.745322 1844089 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1124 09:47:03.745328 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745934 1844089 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1124 09:47:03.745975 1844089 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1124 09:47:03.745989 1844089 command_runner.go:130] > # the cgroup blockio controller.
	I1124 09:47:03.746500 1844089 command_runner.go:130] > # blockio_config_file = ""
	I1124 09:47:03.746515 1844089 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1124 09:47:03.746541 1844089 command_runner.go:130] > # blockio parameters.
	I1124 09:47:03.747165 1844089 command_runner.go:130] > # blockio_reload = false
	I1124 09:47:03.747182 1844089 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1124 09:47:03.747187 1844089 command_runner.go:130] > # irqbalance daemon.
	I1124 09:47:03.747784 1844089 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1124 09:47:03.747803 1844089 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1124 09:47:03.747830 1844089 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1124 09:47:03.747843 1844089 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1124 09:47:03.748453 1844089 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1124 09:47:03.748471 1844089 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1124 09:47:03.748496 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.748966 1844089 command_runner.go:130] > # rdt_config_file = ""
	I1124 09:47:03.748982 1844089 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1124 09:47:03.749348 1844089 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1124 09:47:03.749364 1844089 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1124 09:47:03.749770 1844089 command_runner.go:130] > # separate_pull_cgroup = ""
	I1124 09:47:03.749788 1844089 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1124 09:47:03.749796 1844089 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1124 09:47:03.749820 1844089 command_runner.go:130] > # will be added.
	I1124 09:47:03.749833 1844089 command_runner.go:130] > # default_capabilities = [
	I1124 09:47:03.750067 1844089 command_runner.go:130] > # 	"CHOWN",
	I1124 09:47:03.750401 1844089 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1124 09:47:03.750646 1844089 command_runner.go:130] > # 	"FSETID",
	I1124 09:47:03.750659 1844089 command_runner.go:130] > # 	"FOWNER",
	I1124 09:47:03.750665 1844089 command_runner.go:130] > # 	"SETGID",
	I1124 09:47:03.750669 1844089 command_runner.go:130] > # 	"SETUID",
	I1124 09:47:03.750725 1844089 command_runner.go:130] > # 	"SETPCAP",
	I1124 09:47:03.750739 1844089 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1124 09:47:03.750745 1844089 command_runner.go:130] > # 	"KILL",
	I1124 09:47:03.750755 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.750774 1844089 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1124 09:47:03.750785 1844089 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1124 09:47:03.750991 1844089 command_runner.go:130] > # add_inheritable_capabilities = false
	I1124 09:47:03.751004 1844089 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1124 09:47:03.751023 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751034 1844089 command_runner.go:130] > default_sysctls = [
	I1124 09:47:03.751219 1844089 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1124 09:47:03.751480 1844089 command_runner.go:130] > ]
	I1124 09:47:03.751494 1844089 command_runner.go:130] > # List of devices on the host that a
	I1124 09:47:03.751501 1844089 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1124 09:47:03.751522 1844089 command_runner.go:130] > # allowed_devices = [
	I1124 09:47:03.751532 1844089 command_runner.go:130] > # 	"/dev/fuse",
	I1124 09:47:03.751536 1844089 command_runner.go:130] > # 	"/dev/net/tun",
	I1124 09:47:03.751539 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751545 1844089 command_runner.go:130] > # List of additional devices. specified as
	I1124 09:47:03.751558 1844089 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1124 09:47:03.751576 1844089 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1124 09:47:03.751614 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751625 1844089 command_runner.go:130] > # additional_devices = [
	I1124 09:47:03.751802 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751816 1844089 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1124 09:47:03.752056 1844089 command_runner.go:130] > # cdi_spec_dirs = [
	I1124 09:47:03.752288 1844089 command_runner.go:130] > # 	"/etc/cdi",
	I1124 09:47:03.752302 1844089 command_runner.go:130] > # 	"/var/run/cdi",
	I1124 09:47:03.752307 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752313 1844089 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1124 09:47:03.752348 1844089 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1124 09:47:03.752353 1844089 command_runner.go:130] > # Defaults to false.
	I1124 09:47:03.752752 1844089 command_runner.go:130] > # device_ownership_from_security_context = false
	I1124 09:47:03.752770 1844089 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1124 09:47:03.752778 1844089 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1124 09:47:03.752782 1844089 command_runner.go:130] > # hooks_dir = [
	I1124 09:47:03.752808 1844089 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1124 09:47:03.752819 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752826 1844089 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1124 09:47:03.752833 1844089 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1124 09:47:03.752842 1844089 command_runner.go:130] > # its default mounts from the following two files:
	I1124 09:47:03.752845 1844089 command_runner.go:130] > #
	I1124 09:47:03.752852 1844089 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1124 09:47:03.752858 1844089 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1124 09:47:03.752881 1844089 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1124 09:47:03.752891 1844089 command_runner.go:130] > #
	I1124 09:47:03.752897 1844089 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1124 09:47:03.752913 1844089 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1124 09:47:03.752928 1844089 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1124 09:47:03.752934 1844089 command_runner.go:130] > #      only add mounts it finds in this file.
	I1124 09:47:03.752937 1844089 command_runner.go:130] > #
	I1124 09:47:03.752941 1844089 command_runner.go:130] > # default_mounts_file = ""
	I1124 09:47:03.752946 1844089 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1124 09:47:03.752955 1844089 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1124 09:47:03.753190 1844089 command_runner.go:130] > # pids_limit = -1
	I1124 09:47:03.753207 1844089 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1124 09:47:03.753245 1844089 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1124 09:47:03.753260 1844089 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1124 09:47:03.753269 1844089 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1124 09:47:03.753278 1844089 command_runner.go:130] > # log_size_max = -1
	I1124 09:47:03.753287 1844089 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1124 09:47:03.753296 1844089 command_runner.go:130] > # log_to_journald = false
	I1124 09:47:03.753313 1844089 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1124 09:47:03.753722 1844089 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1124 09:47:03.753734 1844089 command_runner.go:130] > # Path to directory for container attach sockets.
	I1124 09:47:03.753771 1844089 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1124 09:47:03.753785 1844089 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1124 09:47:03.753789 1844089 command_runner.go:130] > # bind_mount_prefix = ""
	I1124 09:47:03.753796 1844089 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1124 09:47:03.753804 1844089 command_runner.go:130] > # read_only = false
	I1124 09:47:03.753810 1844089 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1124 09:47:03.753817 1844089 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1124 09:47:03.753824 1844089 command_runner.go:130] > # live configuration reload.
	I1124 09:47:03.753828 1844089 command_runner.go:130] > # log_level = "info"
	I1124 09:47:03.753845 1844089 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1124 09:47:03.753857 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.754025 1844089 command_runner.go:130] > # log_filter = ""
	I1124 09:47:03.754041 1844089 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754049 1844089 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1124 09:47:03.754066 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754079 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754487 1844089 command_runner.go:130] > # uid_mappings = ""
	I1124 09:47:03.754504 1844089 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754512 1844089 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1124 09:47:03.754516 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754547 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754559 1844089 command_runner.go:130] > # gid_mappings = ""
	I1124 09:47:03.754565 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1124 09:47:03.754572 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754582 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754590 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754595 1844089 command_runner.go:130] > # minimum_mappable_uid = -1
	I1124 09:47:03.754627 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1124 09:47:03.754641 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754648 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754662 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754929 1844089 command_runner.go:130] > # minimum_mappable_gid = -1
	I1124 09:47:03.754942 1844089 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1124 09:47:03.754970 1844089 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1124 09:47:03.754983 1844089 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1124 09:47:03.754989 1844089 command_runner.go:130] > # ctr_stop_timeout = 30
	I1124 09:47:03.754994 1844089 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1124 09:47:03.755006 1844089 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1124 09:47:03.755011 1844089 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1124 09:47:03.755016 1844089 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1124 09:47:03.755021 1844089 command_runner.go:130] > # drop_infra_ctr = true
	I1124 09:47:03.755048 1844089 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1124 09:47:03.755061 1844089 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1124 09:47:03.755080 1844089 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1124 09:47:03.755090 1844089 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1124 09:47:03.755098 1844089 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1124 09:47:03.755104 1844089 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1124 09:47:03.755110 1844089 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1124 09:47:03.755118 1844089 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1124 09:47:03.755122 1844089 command_runner.go:130] > # shared_cpuset = ""
	I1124 09:47:03.755135 1844089 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1124 09:47:03.755143 1844089 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1124 09:47:03.755164 1844089 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1124 09:47:03.755182 1844089 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1124 09:47:03.755369 1844089 command_runner.go:130] > # pinns_path = ""
	I1124 09:47:03.755383 1844089 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1124 09:47:03.755391 1844089 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1124 09:47:03.755617 1844089 command_runner.go:130] > # enable_criu_support = true
	I1124 09:47:03.755632 1844089 command_runner.go:130] > # Enable/disable the generation of the container,
	I1124 09:47:03.755639 1844089 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1124 09:47:03.755935 1844089 command_runner.go:130] > # enable_pod_events = false
	I1124 09:47:03.755951 1844089 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1124 09:47:03.755976 1844089 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1124 09:47:03.755988 1844089 command_runner.go:130] > # default_runtime = "crun"
	I1124 09:47:03.756007 1844089 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1124 09:47:03.756063 1844089 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1124 09:47:03.756088 1844089 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1124 09:47:03.756099 1844089 command_runner.go:130] > # creation as a file is not desired either.
	I1124 09:47:03.756108 1844089 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1124 09:47:03.756127 1844089 command_runner.go:130] > # the hostname is being managed dynamically.
	I1124 09:47:03.756133 1844089 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1124 09:47:03.756166 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.756181 1844089 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1124 09:47:03.756199 1844089 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1124 09:47:03.756211 1844089 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1124 09:47:03.756217 1844089 command_runner.go:130] > # Each entry in the table should follow the format:
	I1124 09:47:03.756220 1844089 command_runner.go:130] > #
	I1124 09:47:03.756230 1844089 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1124 09:47:03.756235 1844089 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1124 09:47:03.756244 1844089 command_runner.go:130] > # runtime_type = "oci"
	I1124 09:47:03.756248 1844089 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1124 09:47:03.756253 1844089 command_runner.go:130] > # inherit_default_runtime = false
	I1124 09:47:03.756258 1844089 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1124 09:47:03.756285 1844089 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1124 09:47:03.756297 1844089 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1124 09:47:03.756301 1844089 command_runner.go:130] > # monitor_env = []
	I1124 09:47:03.756306 1844089 command_runner.go:130] > # privileged_without_host_devices = false
	I1124 09:47:03.756313 1844089 command_runner.go:130] > # allowed_annotations = []
	I1124 09:47:03.756319 1844089 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1124 09:47:03.756330 1844089 command_runner.go:130] > # no_sync_log = false
	I1124 09:47:03.756335 1844089 command_runner.go:130] > # default_annotations = {}
	I1124 09:47:03.756339 1844089 command_runner.go:130] > # stream_websockets = false
	I1124 09:47:03.756349 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.756390 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.756402 1844089 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1124 09:47:03.756409 1844089 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1124 09:47:03.756416 1844089 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1124 09:47:03.756427 1844089 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1124 09:47:03.756448 1844089 command_runner.go:130] > #   in $PATH.
	I1124 09:47:03.756456 1844089 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1124 09:47:03.756461 1844089 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1124 09:47:03.756468 1844089 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1124 09:47:03.756477 1844089 command_runner.go:130] > #   state.
	I1124 09:47:03.756489 1844089 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1124 09:47:03.756495 1844089 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1124 09:47:03.756515 1844089 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1124 09:47:03.756528 1844089 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1124 09:47:03.756534 1844089 command_runner.go:130] > #   the values from the default runtime on load time.
	I1124 09:47:03.756542 1844089 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1124 09:47:03.756551 1844089 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1124 09:47:03.756557 1844089 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1124 09:47:03.756564 1844089 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1124 09:47:03.756571 1844089 command_runner.go:130] > #   The currently recognized values are:
	I1124 09:47:03.756579 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1124 09:47:03.756608 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1124 09:47:03.756621 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1124 09:47:03.756627 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1124 09:47:03.756635 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1124 09:47:03.756647 1844089 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1124 09:47:03.756654 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1124 09:47:03.756661 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1124 09:47:03.756671 1844089 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1124 09:47:03.756687 1844089 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1124 09:47:03.756700 1844089 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1124 09:47:03.756720 1844089 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1124 09:47:03.756731 1844089 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1124 09:47:03.756738 1844089 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1124 09:47:03.756751 1844089 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1124 09:47:03.756759 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1124 09:47:03.756769 1844089 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1124 09:47:03.756774 1844089 command_runner.go:130] > #   deprecated option "conmon".
	I1124 09:47:03.756781 1844089 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1124 09:47:03.756803 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1124 09:47:03.756820 1844089 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1124 09:47:03.756831 1844089 command_runner.go:130] > #   should be moved to the container's cgroup
	I1124 09:47:03.756843 1844089 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1124 09:47:03.756853 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1124 09:47:03.756862 1844089 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1124 09:47:03.756870 1844089 command_runner.go:130] > #   conmon-rs by using:
	I1124 09:47:03.756878 1844089 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1124 09:47:03.756886 1844089 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1124 09:47:03.756907 1844089 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1124 09:47:03.756926 1844089 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1124 09:47:03.756938 1844089 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1124 09:47:03.756945 1844089 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1124 09:47:03.756958 1844089 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1124 09:47:03.756963 1844089 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1124 09:47:03.756972 1844089 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1124 09:47:03.756984 1844089 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1124 09:47:03.756999 1844089 command_runner.go:130] > #   when a machine crash happens.
	I1124 09:47:03.757012 1844089 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1124 09:47:03.757021 1844089 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1124 09:47:03.757033 1844089 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1124 09:47:03.757038 1844089 command_runner.go:130] > #   seccomp profile for the runtime.
	I1124 09:47:03.757047 1844089 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1124 09:47:03.757058 1844089 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1124 09:47:03.757076 1844089 command_runner.go:130] > #
	I1124 09:47:03.757087 1844089 command_runner.go:130] > # Using the seccomp notifier feature:
	I1124 09:47:03.757091 1844089 command_runner.go:130] > #
	I1124 09:47:03.757115 1844089 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1124 09:47:03.757130 1844089 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1124 09:47:03.757134 1844089 command_runner.go:130] > #
	I1124 09:47:03.757141 1844089 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1124 09:47:03.757151 1844089 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1124 09:47:03.757154 1844089 command_runner.go:130] > #
	I1124 09:47:03.757165 1844089 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1124 09:47:03.757172 1844089 command_runner.go:130] > # feature.
	I1124 09:47:03.757175 1844089 command_runner.go:130] > #
	I1124 09:47:03.757195 1844089 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1124 09:47:03.757204 1844089 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1124 09:47:03.757220 1844089 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1124 09:47:03.757233 1844089 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1124 09:47:03.757239 1844089 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1124 09:47:03.757247 1844089 command_runner.go:130] > #
	I1124 09:47:03.757258 1844089 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1124 09:47:03.757268 1844089 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1124 09:47:03.757271 1844089 command_runner.go:130] > #
	I1124 09:47:03.757277 1844089 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1124 09:47:03.757283 1844089 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1124 09:47:03.757298 1844089 command_runner.go:130] > #
	I1124 09:47:03.757320 1844089 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1124 09:47:03.757333 1844089 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1124 09:47:03.757341 1844089 command_runner.go:130] > # limitation.
	I1124 09:47:03.757617 1844089 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1124 09:47:03.757630 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1124 09:47:03.757635 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.757639 1844089 command_runner.go:130] > runtime_root = "/run/crun"
	I1124 09:47:03.757643 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.757670 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.757675 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.757680 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.757690 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.757695 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.757700 1844089 command_runner.go:130] > allowed_annotations = [
	I1124 09:47:03.757954 1844089 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1124 09:47:03.757971 1844089 command_runner.go:130] > ]
	I1124 09:47:03.757978 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.757982 1844089 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1124 09:47:03.758003 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1124 09:47:03.758013 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.758018 1844089 command_runner.go:130] > runtime_root = "/run/runc"
	I1124 09:47:03.758023 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.758033 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.758037 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.758042 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.758047 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.758051 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.758456 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.758471 1844089 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1124 09:47:03.758477 1844089 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1124 09:47:03.758504 1844089 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1124 09:47:03.758514 1844089 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1124 09:47:03.758525 1844089 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1124 09:47:03.758550 1844089 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1124 09:47:03.758572 1844089 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1124 09:47:03.758585 1844089 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1124 09:47:03.758595 1844089 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1124 09:47:03.758608 1844089 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1124 09:47:03.758614 1844089 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1124 09:47:03.758621 1844089 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1124 09:47:03.758629 1844089 command_runner.go:130] > # Example:
	I1124 09:47:03.758634 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1124 09:47:03.758650 1844089 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1124 09:47:03.758663 1844089 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1124 09:47:03.758670 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1124 09:47:03.758684 1844089 command_runner.go:130] > # cpuset = "0-1"
	I1124 09:47:03.758691 1844089 command_runner.go:130] > # cpushares = "5"
	I1124 09:47:03.758695 1844089 command_runner.go:130] > # cpuquota = "1000"
	I1124 09:47:03.758700 1844089 command_runner.go:130] > # cpuperiod = "100000"
	I1124 09:47:03.758703 1844089 command_runner.go:130] > # cpulimit = "35"
	I1124 09:47:03.758714 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.758719 1844089 command_runner.go:130] > # The workload name is workload-type.
	I1124 09:47:03.758726 1844089 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1124 09:47:03.758738 1844089 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1124 09:47:03.758744 1844089 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1124 09:47:03.758763 1844089 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1124 09:47:03.758772 1844089 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1124 09:47:03.758787 1844089 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1124 09:47:03.758800 1844089 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1124 09:47:03.758805 1844089 command_runner.go:130] > # Default value is set to true
	I1124 09:47:03.758816 1844089 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1124 09:47:03.758822 1844089 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1124 09:47:03.758827 1844089 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1124 09:47:03.758837 1844089 command_runner.go:130] > # Default value is set to 'false'
	I1124 09:47:03.758841 1844089 command_runner.go:130] > # disable_hostport_mapping = false
	I1124 09:47:03.758846 1844089 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1124 09:47:03.758869 1844089 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1124 09:47:03.759115 1844089 command_runner.go:130] > # timezone = ""
	I1124 09:47:03.759131 1844089 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1124 09:47:03.759134 1844089 command_runner.go:130] > #
	I1124 09:47:03.759141 1844089 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1124 09:47:03.759163 1844089 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1124 09:47:03.759174 1844089 command_runner.go:130] > [crio.image]
	I1124 09:47:03.759180 1844089 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1124 09:47:03.759194 1844089 command_runner.go:130] > # default_transport = "docker://"
	I1124 09:47:03.759204 1844089 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1124 09:47:03.759211 1844089 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759215 1844089 command_runner.go:130] > # global_auth_file = ""
	I1124 09:47:03.759237 1844089 command_runner.go:130] > # The image used to instantiate infra containers.
	I1124 09:47:03.759259 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759457 1844089 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.759477 1844089 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1124 09:47:03.759497 1844089 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759511 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759702 1844089 command_runner.go:130] > # pause_image_auth_file = ""
	I1124 09:47:03.759716 1844089 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1124 09:47:03.759723 1844089 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1124 09:47:03.759742 1844089 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1124 09:47:03.759757 1844089 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1124 09:47:03.760047 1844089 command_runner.go:130] > # pause_command = "/pause"
	I1124 09:47:03.760064 1844089 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1124 09:47:03.760071 1844089 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1124 09:47:03.760077 1844089 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1124 09:47:03.760108 1844089 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1124 09:47:03.760115 1844089 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1124 09:47:03.760126 1844089 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1124 09:47:03.760131 1844089 command_runner.go:130] > # pinned_images = [
	I1124 09:47:03.760134 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760140 1844089 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1124 09:47:03.760146 1844089 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1124 09:47:03.760157 1844089 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1124 09:47:03.760175 1844089 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1124 09:47:03.760186 1844089 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1124 09:47:03.760191 1844089 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1124 09:47:03.760197 1844089 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1124 09:47:03.760209 1844089 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1124 09:47:03.760216 1844089 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1124 09:47:03.760225 1844089 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1124 09:47:03.760231 1844089 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1124 09:47:03.760246 1844089 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1124 09:47:03.760260 1844089 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1124 09:47:03.760282 1844089 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1124 09:47:03.760292 1844089 command_runner.go:130] > # changing them here.
	I1124 09:47:03.760298 1844089 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1124 09:47:03.760302 1844089 command_runner.go:130] > # insecure_registries = [
	I1124 09:47:03.760312 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760318 1844089 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1124 09:47:03.760329 1844089 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1124 09:47:03.760704 1844089 command_runner.go:130] > # image_volumes = "mkdir"
	I1124 09:47:03.760720 1844089 command_runner.go:130] > # Temporary directory to use for storing big files
	I1124 09:47:03.760964 1844089 command_runner.go:130] > # big_files_temporary_dir = ""
	I1124 09:47:03.760980 1844089 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1124 09:47:03.760987 1844089 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1124 09:47:03.760992 1844089 command_runner.go:130] > # auto_reload_registries = false
	I1124 09:47:03.761030 1844089 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1124 09:47:03.761047 1844089 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1124 09:47:03.761054 1844089 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1124 09:47:03.761232 1844089 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1124 09:47:03.761247 1844089 command_runner.go:130] > # The mode of short name resolution.
	I1124 09:47:03.761255 1844089 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1124 09:47:03.761263 1844089 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1124 09:47:03.761289 1844089 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1124 09:47:03.761475 1844089 command_runner.go:130] > # short_name_mode = "enforcing"
	I1124 09:47:03.761491 1844089 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1124 09:47:03.761498 1844089 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1124 09:47:03.761714 1844089 command_runner.go:130] > # oci_artifact_mount_support = true
	I1124 09:47:03.761730 1844089 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1124 09:47:03.761735 1844089 command_runner.go:130] > # CNI plugins.
	I1124 09:47:03.761738 1844089 command_runner.go:130] > [crio.network]
	I1124 09:47:03.761777 1844089 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1124 09:47:03.761790 1844089 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1124 09:47:03.761797 1844089 command_runner.go:130] > # cni_default_network = ""
	I1124 09:47:03.761810 1844089 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1124 09:47:03.761814 1844089 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1124 09:47:03.761820 1844089 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1124 09:47:03.761839 1844089 command_runner.go:130] > # plugin_dirs = [
	I1124 09:47:03.762075 1844089 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1124 09:47:03.762088 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762092 1844089 command_runner.go:130] > # List of included pod metrics.
	I1124 09:47:03.762097 1844089 command_runner.go:130] > # included_pod_metrics = [
	I1124 09:47:03.762100 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762106 1844089 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1124 09:47:03.762124 1844089 command_runner.go:130] > [crio.metrics]
	I1124 09:47:03.762136 1844089 command_runner.go:130] > # Globally enable or disable metrics support.
	I1124 09:47:03.762321 1844089 command_runner.go:130] > # enable_metrics = false
	I1124 09:47:03.762336 1844089 command_runner.go:130] > # Specify enabled metrics collectors.
	I1124 09:47:03.762342 1844089 command_runner.go:130] > # Per default all metrics are enabled.
	I1124 09:47:03.762349 1844089 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1124 09:47:03.762356 1844089 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1124 09:47:03.762386 1844089 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
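	The collector-name equivalence described above (prefixes `container_runtime_` and `crio_` are interchangeable) can be sketched as a simple prefix-stripping normalization; this is an illustrative shell sketch, not CRI-O's actual implementation:

	```shell
	# Normalize a metrics collector name by stripping the optional prefixes,
	# so "operations", "crio_operations", and "container_runtime_crio_operations"
	# all reduce to the same canonical name.
	normalize() {
	  local name="$1"
	  name="${name#container_runtime_}"
	  name="${name#crio_}"
	  echo "$name"
	}
	normalize "operations"                        # -> operations
	normalize "crio_operations"                   # -> operations
	normalize "container_runtime_crio_operations" # -> operations
	```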
	I1124 09:47:03.762392 1844089 command_runner.go:130] > # metrics_collectors = [
	I1124 09:47:03.763119 1844089 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1124 09:47:03.763143 1844089 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1124 09:47:03.763149 1844089 command_runner.go:130] > # 	"containers_oom_total",
	I1124 09:47:03.763153 1844089 command_runner.go:130] > # 	"processes_defunct",
	I1124 09:47:03.763188 1844089 command_runner.go:130] > # 	"operations_total",
	I1124 09:47:03.763201 1844089 command_runner.go:130] > # 	"operations_latency_seconds",
	I1124 09:47:03.763207 1844089 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1124 09:47:03.763212 1844089 command_runner.go:130] > # 	"operations_errors_total",
	I1124 09:47:03.763216 1844089 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1124 09:47:03.763221 1844089 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1124 09:47:03.763226 1844089 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1124 09:47:03.763237 1844089 command_runner.go:130] > # 	"image_pulls_success_total",
	I1124 09:47:03.763260 1844089 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1124 09:47:03.763265 1844089 command_runner.go:130] > # 	"containers_oom_count_total",
	I1124 09:47:03.763270 1844089 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1124 09:47:03.763282 1844089 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1124 09:47:03.763286 1844089 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1124 09:47:03.763290 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763295 1844089 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1124 09:47:03.763300 1844089 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1124 09:47:03.763305 1844089 command_runner.go:130] > # The port on which the metrics server will listen.
	I1124 09:47:03.763313 1844089 command_runner.go:130] > # metrics_port = 9090
	I1124 09:47:03.763327 1844089 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1124 09:47:03.763337 1844089 command_runner.go:130] > # metrics_socket = ""
	I1124 09:47:03.763343 1844089 command_runner.go:130] > # The certificate for the secure metrics server.
	I1124 09:47:03.763349 1844089 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1124 09:47:03.763360 1844089 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1124 09:47:03.763365 1844089 command_runner.go:130] > # certificate on any modification event.
	I1124 09:47:03.763369 1844089 command_runner.go:130] > # metrics_cert = ""
	I1124 09:47:03.763375 1844089 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1124 09:47:03.763379 1844089 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1124 09:47:03.763384 1844089 command_runner.go:130] > # metrics_key = ""
	I1124 09:47:03.763415 1844089 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1124 09:47:03.763426 1844089 command_runner.go:130] > [crio.tracing]
	I1124 09:47:03.763442 1844089 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1124 09:47:03.763451 1844089 command_runner.go:130] > # enable_tracing = false
	I1124 09:47:03.763456 1844089 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1124 09:47:03.763461 1844089 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1124 09:47:03.763468 1844089 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1124 09:47:03.763476 1844089 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1124 09:47:03.763481 1844089 command_runner.go:130] > # CRI-O NRI configuration.
	I1124 09:47:03.763500 1844089 command_runner.go:130] > [crio.nri]
	I1124 09:47:03.763505 1844089 command_runner.go:130] > # Globally enable or disable NRI.
	I1124 09:47:03.763508 1844089 command_runner.go:130] > # enable_nri = true
	I1124 09:47:03.763524 1844089 command_runner.go:130] > # NRI socket to listen on.
	I1124 09:47:03.763535 1844089 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1124 09:47:03.763540 1844089 command_runner.go:130] > # NRI plugin directory to use.
	I1124 09:47:03.763544 1844089 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1124 09:47:03.763552 1844089 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1124 09:47:03.763560 1844089 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1124 09:47:03.763566 1844089 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1124 09:47:03.763634 1844089 command_runner.go:130] > # nri_disable_connections = false
	I1124 09:47:03.763648 1844089 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1124 09:47:03.763654 1844089 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1124 09:47:03.763669 1844089 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1124 09:47:03.763681 1844089 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1124 09:47:03.763685 1844089 command_runner.go:130] > # NRI default validator configuration.
	I1124 09:47:03.763692 1844089 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1124 09:47:03.763699 1844089 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1124 09:47:03.763703 1844089 command_runner.go:130] > # can be restricted/rejected:
	I1124 09:47:03.763707 1844089 command_runner.go:130] > # - OCI hook injection
	I1124 09:47:03.763719 1844089 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1124 09:47:03.763724 1844089 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1124 09:47:03.763730 1844089 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1124 09:47:03.763748 1844089 command_runner.go:130] > # - adjustment of linux namespaces
	I1124 09:47:03.763770 1844089 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1124 09:47:03.763778 1844089 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1124 09:47:03.763789 1844089 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1124 09:47:03.763792 1844089 command_runner.go:130] > #
	I1124 09:47:03.763797 1844089 command_runner.go:130] > # [crio.nri.default_validator]
	I1124 09:47:03.763802 1844089 command_runner.go:130] > # nri_enable_default_validator = false
	I1124 09:47:03.763807 1844089 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1124 09:47:03.763813 1844089 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1124 09:47:03.763843 1844089 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1124 09:47:03.763859 1844089 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1124 09:47:03.763864 1844089 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1124 09:47:03.763875 1844089 command_runner.go:130] > # nri_validator_required_plugins = [
	I1124 09:47:03.763879 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763885 1844089 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1124 09:47:03.763897 1844089 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1124 09:47:03.763900 1844089 command_runner.go:130] > [crio.stats]
	I1124 09:47:03.763906 1844089 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1124 09:47:03.763912 1844089 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1124 09:47:03.763930 1844089 command_runner.go:130] > # stats_collection_period = 0
	I1124 09:47:03.763938 1844089 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1124 09:47:03.763955 1844089 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1124 09:47:03.763966 1844089 command_runner.go:130] > # collection_period = 0
	I1124 09:47:03.765749 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69660512Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1124 09:47:03.765775 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696644858Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1124 09:47:03.765802 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696680353Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1124 09:47:03.765817 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696705773Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1124 09:47:03.765831 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696792248Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:03.765844 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69715048Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1124 09:47:03.765855 1844089 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1124 09:47:03.766230 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:47:03.766250 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:47:03.766285 1844089 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:47:03.766313 1844089 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:47:03.766550 1844089 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
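	The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick sanity check on such a stream, sketched here against a stripped-down stand-in file rather than minikube's actual output:

	```shell
	# Write a minimal stand-in for the multi-document kubeadm config stream
	# and count the documents by their "kind:" lines.
	f="$(mktemp)"
	cat > "$f" <<'EOF'
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	EOF
	grep -c '^kind:' "$f"   # -> 4
	```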
	
	I1124 09:47:03.766656 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:47:03.773791 1844089 command_runner.go:130] > kubeadm
	I1124 09:47:03.773812 1844089 command_runner.go:130] > kubectl
	I1124 09:47:03.773818 1844089 command_runner.go:130] > kubelet
	I1124 09:47:03.774893 1844089 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:47:03.774995 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:47:03.782726 1844089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:47:03.796280 1844089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:47:03.809559 1844089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:47:03.822485 1844089 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:47:03.826210 1844089 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1124 09:47:03.826334 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:03.934288 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:04.458773 1844089 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:47:04.458800 1844089 certs.go:195] generating shared ca certs ...
	I1124 09:47:04.458824 1844089 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:04.458988 1844089 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:47:04.459071 1844089 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:47:04.459080 1844089 certs.go:257] generating profile certs ...
	I1124 09:47:04.459195 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:47:04.459263 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:47:04.459319 1844089 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:47:04.459333 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1124 09:47:04.459352 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1124 09:47:04.459364 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1124 09:47:04.459374 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1124 09:47:04.459384 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1124 09:47:04.459403 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1124 09:47:04.459415 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1124 09:47:04.459426 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1124 09:47:04.459482 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:47:04.459525 1844089 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:47:04.459534 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:47:04.459574 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:47:04.459609 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:47:04.459638 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:47:04.459701 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:04.459738 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.459752 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem -> /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.459763 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.460411 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:47:04.483964 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:47:04.505086 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:47:04.526066 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:47:04.552811 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:47:04.572010 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:47:04.590830 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:47:04.609063 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:47:04.627178 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:47:04.645228 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:47:04.662875 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:47:04.680934 1844089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:47:04.694072 1844089 ssh_runner.go:195] Run: openssl version
	I1124 09:47:04.700410 1844089 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1124 09:47:04.700488 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:47:04.708800 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712351 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712441 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712518 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.755374 1844089 command_runner.go:130] > 3ec20f2e
	I1124 09:47:04.755866 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:47:04.763956 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:47:04.772579 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776497 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776523 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776574 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.817126 1844089 command_runner.go:130] > b5213941
	I1124 09:47:04.817555 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:47:04.825631 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:47:04.834323 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838391 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838437 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838503 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.879479 1844089 command_runner.go:130] > 51391683
	I1124 09:47:04.879964 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
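	The symlink names created above (`3ec20f2e.0`, `b5213941.0`, `51391683.0`) follow OpenSSL's hashed-directory lookup convention: each trusted CA is linked as `<subject-hash>.0`. A minimal sketch of the same hash-then-link steps, using a throwaway self-signed cert under `/tmp` (paths and subject are illustrative, not minikube's):

```shell
set -e
cert=/tmp/demo-ca.pem
# generate a throwaway self-signed CA cert to hash
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out "$cert" -days 1 -subj "/CN=demoCA" 2>/dev/null
# same command the log runs: print the 8-hex-digit subject hash
hash=$(openssl x509 -hash -noout -in "$cert")
# OpenSSL resolves trusted CAs via <subject-hash>.0 links in the certs dir
ln -fs "$cert" "/tmp/${hash}.0"
openssl x509 -noout -subject -in "/tmp/${hash}.0"
```

	minikube's `test -L … || ln -fs …` wrapper only adds idempotence: the link is recreated unless it already exists.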
	I1124 09:47:04.888201 1844089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892298 1844089 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892323 1844089 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1124 09:47:04.892330 1844089 command_runner.go:130] > Device: 259,1	Inode: 1049847     Links: 1
	I1124 09:47:04.892337 1844089 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:04.892344 1844089 command_runner.go:130] > Access: 2025-11-24 09:42:55.781942492 +0000
	I1124 09:47:04.892349 1844089 command_runner.go:130] > Modify: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892354 1844089 command_runner.go:130] > Change: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892360 1844089 command_runner.go:130] >  Birth: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892420 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:47:04.935687 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.935791 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:47:04.977560 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.978011 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:47:05.021496 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.021984 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:47:05.064844 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.065359 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:47:05.108127 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.108275 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:47:05.149417 1844089 command_runner.go:130] > Certificate will not expire
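	Each `Certificate will not expire` line above is the stdout of `openssl x509 -checkend N`, which exits 0 when the certificate remains valid for at least N more seconds (86400 = 24 h in the log). A hedged sketch against a throwaway 2-day cert so both outcomes are deterministic:

```shell
set -e
# throwaway cert valid for 2 days
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/exp.key \
  -out /tmp/exp.pem -days 2 -subj "/CN=exp" 2>/dev/null
# valid for >24h: prints "Certificate will not expire", exit status 0
openssl x509 -noout -in /tmp/exp.pem -checkend 86400
# demand 3 days of validity: prints "Certificate will expire", exit status 1
openssl x509 -noout -in /tmp/exp.pem -checkend 259200 || true
```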
	I1124 09:47:05.149874 1844089 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:05.149970 1844089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:47:05.150065 1844089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:05.178967 1844089 cri.go:89] found id: ""
	I1124 09:47:05.179068 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:47:05.186015 1844089 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1124 09:47:05.186039 1844089 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1124 09:47:05.186047 1844089 command_runner.go:130] > /var/lib/minikube/etcd:
	I1124 09:47:05.187003 1844089 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:47:05.187020 1844089 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:47:05.187103 1844089 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:47:05.195380 1844089 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:47:05.195777 1844089 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-373432" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.195884 1844089 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-1804834/kubeconfig needs updating (will repair): [kubeconfig missing "functional-373432" cluster setting kubeconfig missing "functional-373432" context setting]
	I1124 09:47:05.196176 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.196576 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.196729 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.197389 1844089 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 09:47:05.197410 1844089 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 09:47:05.197417 1844089 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 09:47:05.197421 1844089 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 09:47:05.197425 1844089 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 09:47:05.197478 1844089 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1124 09:47:05.197834 1844089 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:47:05.206841 1844089 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1124 09:47:05.206877 1844089 kubeadm.go:602] duration metric: took 19.851198ms to restartPrimaryControlPlane
	I1124 09:47:05.206901 1844089 kubeadm.go:403] duration metric: took 57.044926ms to StartCluster
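	The `does not require reconfiguration` decision above reduces to the exit status of the `diff -u` two lines earlier: status 0 means the freshly generated kubeadm config matches the one already on disk, so minikube takes the restart path instead of rerunning kubeadm. An illustrative sketch with stand-in files in `/tmp` (the real paths are `/var/tmp/minikube/kubeadm.yaml{,.new}`):

```shell
set -e
# stand-ins for the current and freshly generated kubeadm configs
printf 'apiVersion: kubeadm.k8s.io/v1beta4\n' > /tmp/kubeadm.yaml
cp /tmp/kubeadm.yaml /tmp/kubeadm.yaml.new
# diff exits 0 when files are identical, 1 when they differ
if diff -u /tmp/kubeadm.yaml /tmp/kubeadm.yaml.new >/dev/null; then
  echo "no reconfiguration needed"    # the path the log takes
else
  echo "config drift: rerun kubeadm"
fi
```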
	I1124 09:47:05.206915 1844089 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.206989 1844089 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.207632 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.208100 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:05.207869 1844089 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:05.208216 1844089 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:05.208554 1844089 addons.go:70] Setting storage-provisioner=true in profile "functional-373432"
	I1124 09:47:05.208570 1844089 addons.go:239] Setting addon storage-provisioner=true in "functional-373432"
	I1124 09:47:05.208595 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.208650 1844089 addons.go:70] Setting default-storageclass=true in profile "functional-373432"
	I1124 09:47:05.208696 1844089 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-373432"
	I1124 09:47:05.208964 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.209057 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.215438 1844089 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:05.218563 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:05.247382 1844089 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:05.249311 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.249495 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.249781 1844089 addons.go:239] Setting addon default-storageclass=true in "functional-373432"
	I1124 09:47:05.249815 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.250242 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.250436 1844089 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.250452 1844089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:05.250491 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.282635 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.300501 1844089 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:05.300528 1844089 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:05.300592 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.336568 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.425988 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:05.454084 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.488439 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.208671 1844089 node_ready.go:35] waiting up to 6m0s for node "functional-373432" to be "Ready" ...
	I1124 09:47:06.208714 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208746 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208771 1844089 retry.go:31] will retry after 239.578894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.208823 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208836 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208841 1844089 retry.go:31] will retry after 363.194189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208887 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.209209 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.448577 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:06.513317 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.513406 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.513430 1844089 retry.go:31] will retry after 455.413395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.572567 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.636310 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.636351 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.636371 1844089 retry.go:31] will retry after 493.81878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.709791 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.969606 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.043721 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.043767 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.043786 1844089 retry.go:31] will retry after 737.997673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.130919 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.189702 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.189740 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.189777 1844089 retry.go:31] will retry after 362.835066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.209918 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.209989 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.210325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.552843 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.609433 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.612888 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.612921 1844089 retry.go:31] will retry after 813.541227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.709061 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.709150 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.709464 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.782677 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.840776 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.844096 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.844127 1844089 retry.go:31] will retry after 1.225797654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.209825 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.209923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.210302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:08.210357 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:08.426707 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:08.489610 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:08.489648 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.489666 1844089 retry.go:31] will retry after 1.230621023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.709036 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.709146 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.709492 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.070184 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:09.132816 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.132856 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.132877 1844089 retry.go:31] will retry after 1.628151176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.209565 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.709579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.709673 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.721235 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:09.779532 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.779572 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.779591 1844089 retry.go:31] will retry after 1.535326746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.208957 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:10.709858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.709945 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.710278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:10.710329 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:10.761451 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:10.821517 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:10.825161 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.825191 1844089 retry.go:31] will retry after 2.22755575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.209753 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.209827 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.210169 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:11.315630 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:11.371370 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:11.375223 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.375258 1844089 retry.go:31] will retry after 3.052255935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.709710 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.709783 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.710113 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.208839 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.208935 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.209276 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.709439 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:13.052884 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:13.107513 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:13.110665 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.110696 1844089 retry.go:31] will retry after 2.047132712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.208986 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:13.209499 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:13.708863 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.708946 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.709225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.428018 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:14.497830 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:14.500554 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.500586 1844089 retry.go:31] will retry after 5.866686171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.708931 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.158123 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.208926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.209197 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.236504 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:15.240097 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.240134 1844089 retry.go:31] will retry after 4.86514919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.710246 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:15.710298 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:16.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.209082 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:16.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.709395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.209050 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.708987 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:18.208849 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.208918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.209189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:18.209229 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:18.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.708962 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.709278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.709232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:20.105978 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:20.163220 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.166411 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.166455 1844089 retry.go:31] will retry after 7.973407294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.209623 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.209700 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.210040 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:20.210093 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:20.367494 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:20.426176 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.426221 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.426244 1844089 retry.go:31] will retry after 7.002953248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.709786 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.710109 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.208846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.208922 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.709365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.209249 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.209597 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.709231 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.709348 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.709682 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:22.709735 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:23.209559 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.209633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.209953 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:23.709725 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.710141 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.209255 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.708973 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.709052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:25.209389 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.209467 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.209841 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:25.209903 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:25.709642 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.709719 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.209709 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.210119 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.709913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.709992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.710307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.208828 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.208902 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.209226 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.429779 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:27.489021 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:27.489061 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.489078 1844089 retry.go:31] will retry after 11.455669174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.709620 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.709697 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.710061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:27.710112 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:28.140690 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:28.207909 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:28.207963 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.207981 1844089 retry.go:31] will retry after 7.295318191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.209358 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:28.709045 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.709130 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.709479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.209267 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.209347 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.209673 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.709959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.710312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:29.710375 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:30.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.209713 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.210010 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:30.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.208899 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.709282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:32.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.209035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.209376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:32.209432 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:32.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.709024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.208858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.208927 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.209204 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.709003 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:34.208983 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.209403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:34.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:34.709379 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.709553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.709927 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.209738 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.209811 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.210108 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.503497 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:35.564590 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:35.564633 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.564653 1844089 retry.go:31] will retry after 18.757863028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.709881 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.709958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.710297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.208842 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.208909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.209196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.708965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.709288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:36.709337 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:37.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.209034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:37.708926 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.708999 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:38.709418 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:38.945958 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:39.002116 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:39.006563 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.006598 1844089 retry.go:31] will retry after 17.731618054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.209830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.210101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:39.708971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.709049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.209137 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.709279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.709607 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:40.709669 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:41.209237 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:41.709465 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.709538 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.709862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.209660 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.209740 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.210065 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.709826 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.710247 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:42.710300 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:43.208851 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.208929 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.209238 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:43.708832 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.708904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.709198 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.209292 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.709200 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.709637 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:45.209579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.209674 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.210095 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:45.210174 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:45.708846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.708926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.709257 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.709348 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.208969 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:47.709460 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:48.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:48.708913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.708985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.709311 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.209041 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.709341 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.709413 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:49.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:50.209504 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.209579 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.209916 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:50.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.709795 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.209819 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.210144 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.708840 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.708913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.709251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:52.208995 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.209079 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.209450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:52.209504 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:52.709193 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.709263 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.709579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.209019 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.709514 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.208914 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.208983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.323627 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:54.379391 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:54.382809 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.382842 1844089 retry.go:31] will retry after 21.097681162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.709482 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.709561 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.709905 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:54.709960 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:55.209834 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.209915 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.210225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:55.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.708984 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.709297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.209078 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.709184 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.709266 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.709603 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.738841 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:56.794457 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:56.797830 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:56.797870 1844089 retry.go:31] will retry after 32.033139138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:57.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.209553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.209864 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:57.209918 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:57.709718 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.709790 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.710100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.209898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.209970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.210337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.709037 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.709135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.209241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.209573 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.709578 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.709657 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.710027 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:59.710084 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:00.211215 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.211305 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.211621 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:00.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.208998 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.209081 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.708891 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.708967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:02.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.209136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.209526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:02.209599 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:02.709293 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.709375 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.709754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.209529 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.209595 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.209866 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.709708 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.709780 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.710093 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:04.209893 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.209965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.210332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:04.210385 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:04.709021 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.709445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.209464 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.209551 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.209872 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.709670 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.709745 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.710155 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.209763 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.209847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.708847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.708923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.709285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:06.709340 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:07.208931 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:07.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.709326 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.709122 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.709201 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.709539 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:08.709592 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:09.209218 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.209284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.209536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:09.709509 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.709587 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.709963 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.209602 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.209679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.209999 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.709702 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.709772 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.710032 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:10.710072 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:11.209870 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.209951 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.210285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:11.708984 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.208994 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:13.209062 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.209163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:13.209567 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:13.709210 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.709665 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.209027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.708929 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:15.209506 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.209583 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.209851 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:15.209900 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:15.481440 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:15.543475 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:15.543517 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.543536 1844089 retry.go:31] will retry after 17.984212056s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.709841 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.709917 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.710203 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.208972 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.209053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.209359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.708920 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.709254 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.209025 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.209445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.709181 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.709254 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.709571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:17.709636 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:18.209204 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.209276 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:18.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.209167 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.209240 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.709543 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.709616 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.709867 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:19.709908 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:20.209743 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.209813 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.210142 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:20.708844 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.708918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.709248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.709064 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:22.209022 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.209096 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.209401 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:22.209447 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:22.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.209381 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.709077 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.709165 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.709527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:24.209256 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.209332 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:24.209710 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:24.709523 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.709594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.709919 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.209714 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.209794 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.210176 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.709866 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.709934 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.710232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.709174 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.709252 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.709562 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:26.709621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:27.209207 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.209330 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.209681 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:27.709493 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.709901 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.209534 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.209607 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.209945 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.709691 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:28.710042 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:28.831261 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:48:28.892751 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892791 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892882 1844089 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:29.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:29.709415 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.709488 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.709832 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.209666 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.209735 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.209996 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.709837 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.709912 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.710250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:30.710310 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:31.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.209451 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:31.708995 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.709068 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.209127 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.209200 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.209540 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.709251 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.709359 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.709688 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:33.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:33.209573 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:33.528038 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:33.587216 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587268 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587355 1844089 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:33.590586 1844089 out.go:179] * Enabled addons: 
	I1124 09:48:33.594109 1844089 addons.go:530] duration metric: took 1m28.385890989s for enable addons: enabled=[]
	I1124 09:48:33.709504 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.709580 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.709909 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.209684 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.209763 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.210103 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:35.209792 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.209867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.210196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:35.210254 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:35.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.209290 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.708942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.209089 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.209182 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:37.709398 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:38.208956 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.209049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.209393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:38.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.209063 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.209144 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.209398 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.709762 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:39.709826 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:40.209362 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.209445 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.209801 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:40.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.709695 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.710016 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.209808 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.209911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.210242 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.708947 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.709450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:42.209333 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.209441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.209737 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:42.209782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:42.709513 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.709593 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.709913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.209705 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.209787 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.210136 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.709811 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.709882 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.710135 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.208840 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.208916 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.709434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:44.709491 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:45.209557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.210004 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:45.709853 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.709947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.710263 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.708971 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:47.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:47.209423 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:47.708928 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.709090 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.709181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.709512 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:49.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:49.209487 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:49.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.209043 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.208831 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.209321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.708959 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:51.709417 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:52.209136 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.209591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:52.709205 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.709536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.209062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.709175 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.709255 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.709599 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:53.709661 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:54.209206 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.209288 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:54.709557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.709679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.709998 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.209740 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.210158 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.708864 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:56.208988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.209080 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:56.209502 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:56.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.709658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.209431 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.209503 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.209825 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.709393 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.709781 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:58.209591 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.209670 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.210036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:58.210095 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:58.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.709861 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.208919 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.709435 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.709520 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.709836 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:00.209722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.210110 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:00.210156 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:00.709882 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.709966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.710301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.208997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.709044 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.709069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:02.709406 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:03.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.209309 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:03.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.709027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.709334 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.208982 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.209059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.709678 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:04.709782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:05.209548 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.209645 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.209977 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:05.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.710166 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.208981 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.209051 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.209332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.709332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:07.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.209086 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.209494 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:07.209563 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:07.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.709391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.209399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.709011 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.709085 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.209052 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.209488 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.709362 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.709442 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.709796 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:09.709855 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:10.209613 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.209690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:10.709735 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.709803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.209881 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.209958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.210304 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.709359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:12.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:12.209396 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:12.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.709325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.209056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.209385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.709008 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.709380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:14.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.209238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.209577 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:14.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:14.709397 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.709478 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.709814 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.209760 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.210102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.709949 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.710282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.209016 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.709074 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.709163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.709419 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:16.709459 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:17.209141 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.209215 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:17.709286 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.709666 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.209424 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.209499 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.209754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.709505 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.709585 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.709897 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:18.709953 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:19.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.209779 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.210117 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:19.709834 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:21.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:21.209415 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:21.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.709029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.209126 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.209204 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.209575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.709550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:23.209231 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.209670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:23.209763 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:23.709555 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.709633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.709995 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.209767 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.209841 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.709526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:25.209328 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.209411 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.209756 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:25.209816 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:25.709508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.709600 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.709938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.209774 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.209856 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.210202 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.709369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:27.209746 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.210131 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:27.210184 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:27.708830 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.708905 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.208880 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.209307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.709007 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.709327 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.209020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.709345 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.709441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.709777 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:29.709838 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:30.209612 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.209687 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.209958 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:30.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.709798 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.710129 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.208884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.209299 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.708974 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:32.208916 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.208993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:32.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:32.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.208919 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.208994 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.209330 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.709056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.709413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:34.209151 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.209227 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:34.209646 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:34.709436 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.709506 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.709774 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.209725 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.209803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.210160 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.708884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.708977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.208912 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.209323 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.709458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:36.709524 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:37.209047 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.209151 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:37.709220 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.709324 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.709631 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.209508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.209592 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.209964 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.709785 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.709869 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.710199 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:38.710257 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:39.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.208884 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.209168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:39.709057 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.709156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.709501 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.209097 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.209195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.709222 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.709295 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.709630 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:41.209317 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.209397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.209747 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:41.209802 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:41.709569 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.709654 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.709993 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.209817 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.209904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.210200 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.209070 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.709575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:43.709620 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:44.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:44.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.709401 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.709783 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.209860 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.209959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.210271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.708945 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:46.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.209515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:46.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:46.709202 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.709268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.209384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.709402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.209161 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.209414 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.709091 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.709194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.709569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:48.709627 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:49.209307 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.209384 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.209719 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:49.709527 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.709599 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.709865 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.209620 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.209699 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.210039 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.709717 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.709799 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:50.710183 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:51.208825 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.208894 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.209172 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:51.708925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.709349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.708893 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:53.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.209349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:53.209399 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:53.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.208920 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.209318 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.709373 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.709458 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.709760 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:55.209592 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.209978 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:55.210040 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:55.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.710161 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.208943 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.209271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.708876 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.708959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.208866 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.209285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.708997 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.709427 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:57.709482 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:58.209166 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.209246 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.209658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:58.709454 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.709524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.709780 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.209521 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.209598 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.209934 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.709770 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.709854 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.710168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:59.710230 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:00.208926 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.210913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:50:00.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.709842 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.710201 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.209315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.709093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:02.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.209057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:02.209542 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:02.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.709389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.209380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.708939 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.709357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.208970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.209268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.709182 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.709269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.709623 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:04.709678 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:05.209442 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.209524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.209862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:05.709612 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.709690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.710022 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.209806 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.209880 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.210219 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:07.209084 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.209187 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.209448 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:07.209497 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:07.709139 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.709341 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.209829 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.209903 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.708897 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.708964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.709236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.208927 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.209002 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.209378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.708935 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:09.709424 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:10.208903 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.209331 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:10.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.709423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.209530 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.709132 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.709202 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:11.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:12.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:12.709068 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.709177 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.709636 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.209220 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.209299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.209571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:14.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.209025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:14.209433 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:14.708909 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.708988 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.709306 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.209826 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.210152 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.708902 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.208905 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.208978 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.209278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.708874 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.709267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:16.709311 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:17.208877 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.209356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:17.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.209413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.709157 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.709238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:18.709645 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:19.209201 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.209269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.209518 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:19.709485 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.709558 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.709880 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.209636 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.209974 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.709755 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.709829 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.710090 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:20.710130 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:21.209835 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.209913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:21.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.709338 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.208900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.208981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.709058 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:23.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.209616 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:23.209677 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:23.709211 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.708953 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.209203 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.209580 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.709392 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.709705 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:25.709765 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:26.209510 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.209594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.209928 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:26.709733 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.209837 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.209926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.210235 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.709350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:28.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.209251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:28.209296 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:28.709016 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.709092 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.709432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.709708 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:30.209514 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.209603 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.209930 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:30.209989 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:30.709705 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.709782 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.710096 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.209823 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.210153 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.709337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.209065 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.209484 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:32.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:33.209221 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.209638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:33.709229 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.709309 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.709638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.209279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.209527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.709451 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.709526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.709824 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:34.709870 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:35.209712 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.210156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:35.709774 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.710101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.208924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.209266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.708961 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.709411 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:37.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.208992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.209261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:37.209303 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:37.708946 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.209345 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.709003 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.709091 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.709404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:39.209187 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.209613 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:39.209672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:39.709433 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.709508 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.709838 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.209598 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.209675 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.709773 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.709855 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.710189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.208908 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:41.709318 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:42.209001 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.209093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:42.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.709286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.709587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.209235 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.209303 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.709313 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.709652 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:43.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:44.209469 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.209542 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.209879 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:44.709684 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.709755 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.710023 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.208845 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.208942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.709723 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.709804 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.710156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:45.710211 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:46.208872 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.208948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.209249 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:46.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.208964 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.709147 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.709424 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:48.209095 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.209192 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:48.209580 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:48.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.709378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.209077 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.709414 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.709491 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.709816 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:50.209655 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.209742 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.210066 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:50.210123 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:50.709861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.709937 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.710188 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.208878 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.208952 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.209322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.708914 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.708993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.208985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.709023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.709362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:52.709420 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:53.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.209038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.209404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:53.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.708997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.709294 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.709054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.709449 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:54.709516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:55.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.209634 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.209938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:55.709754 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.709830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.710148 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.208939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:57.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.209386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:57.209445 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:57.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.709399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.209082 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.209168 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.209479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.709393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.208975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.209052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.708894 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.709244 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:59.709289 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:00.209000 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.209097 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:00.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.209068 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.209486 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:01.709394 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:02.208943 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.209065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:02.708872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.708947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.709229 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:03.208953 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.210127 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:51:03.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.708939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.709302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:04.209006 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.209406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:04.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:04.709392 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.709474 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.709835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.209403 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.209479 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.209835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.709680 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.709766 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.710028 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:06.209869 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.209955 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.210295 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:06.210355 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:06.708964 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.709046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.709408 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.708938 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.209150 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.209225 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.709219 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.709289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.709627 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:08.709719 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:09.209522 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.209981 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:09.709768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.709843 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.208987 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:11.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.209070 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:11.209465 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:11.708886 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.708972 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.709239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.208935 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.708976 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.208896 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:13.709391 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:14.208984 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:14.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.709615 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.209618 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.209698 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.210033 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.709832 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.709911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.710236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:15.710293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:16.208918 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:16.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.709009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.709328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.209046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.209357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.708924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.709185 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:18.208887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.208963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.209319 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:18.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:18.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.709038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.208917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.709241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.709591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:20.209330 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.209415 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:20.209872 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:20.709638 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.709728 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.209872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.209964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.210347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.709162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.709523 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.209023 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.209095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.209382 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.709055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:22.709477 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:23.209195 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.209283 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:23.709228 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.709557 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.709350 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.709431 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.709744 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:24.709799 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:25.209704 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.210041 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:25.709815 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.709891 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.209912 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.209990 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.210312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.708887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.708968 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.709274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:27.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:27.209444 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:27.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.209045 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.209134 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:29.209164 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:29.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:29.709432 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.709510 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.709795 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.209546 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.209973 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.709624 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.709702 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.710036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:31.209789 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.209866 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.210145 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:31.210192 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:31.708857 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.709271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.208962 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.209409 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.708881 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.708953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.709262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.709012 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:33.709407 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:34.209051 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.209156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:34.709486 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.709969 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.209768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.209850 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.210220 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.709035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:36.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.209372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:36.209421 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:36.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.709519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.208947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.209239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.709416 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:38.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.209257 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:38.209647 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:38.709163 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.709230 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.208929 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.209009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.209347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.709324 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.709397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.709728 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:40.209489 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.209557 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.209830 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:40.209876 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:40.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.709707 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.710055 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.209685 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.209762 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.210061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.709752 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.709828 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.710112 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.709372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:42.709426 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:43.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:43.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.709026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.209194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.209587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.709472 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.709546 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.709820 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:44.709861 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:45.209849 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.209939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.210268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:45.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.709006 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.709307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.209264 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.709403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:47.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.209219 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.209569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:47.209622 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:47.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.709312 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.709563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.208991 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.709134 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.709207 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.709500 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.209300 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:49.709409 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:50.209121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.209206 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:50.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.709261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.209021 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.209129 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.209441 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.709189 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.709265 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.709596 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:51.709649 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:52.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.209290 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.209551 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:52.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.209080 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.209550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.708975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.709317 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.209337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:54.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:54.708980 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.209380 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.209452 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.209779 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.709062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.709456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:56.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.209063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:56.209437 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:56.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.709867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.209908 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.209985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.210307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:58.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.209185 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:58.209485 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:58.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.209363 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.708916 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.709322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.209018 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.709370 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.709700 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:00.709758 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:01.209455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.209526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.209787 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:01.709649 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.709729 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.209841 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.209925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.210265 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.709293 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:03.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.209407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:03.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:03.709146 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.709228 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.709570 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.209286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.209544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.709581 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.709667 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.710009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.208954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.708991 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.709066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:05.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:06.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:06.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.709218 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.709559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.209217 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.209291 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.209612 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.709400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:08.209119 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.209197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:08.209621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:08.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.709292 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:10.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:11.209191 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.209268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.209610 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:11.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.709609 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.208967 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.709039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:13.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.209308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:13.209350 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:13.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.709140 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.709483 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.709549 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.709685 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.710128 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:15.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.209289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.209620 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:15.209679 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:15.709455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.709531 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.709878 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.209651 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.209725 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.209983 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.710195 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:17.709412 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:18.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.208939 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.209302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.709272 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.709356 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:19.709724 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:20.209502 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.209578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.209951 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:20.709782 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.710102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.209876 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.209953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.210310 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.708981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.709321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:22.208895 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.208966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.209252 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:22.209293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:22.709034 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.709136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.709493 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.209350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.708983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.709272 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:24.208934 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.209013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:24.209428 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:24.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.709396 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.209216 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.209546 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.709028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:26.209056 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.209172 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:26.209508 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:26.708880 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.708948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.709291 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.709023 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.709120 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.209060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.209160 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:28.709443 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:29.209162 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.209244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:29.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.709568 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.709818 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.209668 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.209750 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.210098 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.708867 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.708942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:31.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.208986 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.209328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:31.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:31.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.709072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.709157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:33.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:33.209513 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:33.709035 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.709137 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.208964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.209274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.709168 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.709244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:35.209409 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.209492 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.209807 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:35.209852 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:35.709526 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.709597 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.709869 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.209708 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.210043 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.710262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.209297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:37.709440 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:38.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.209069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:38.708972 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.709373 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.709697 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:39.709756 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:40.209475 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.209550 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.209908 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:40.709677 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.709752 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.710115 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.209759 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.210192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.708889 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.709284 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:42.208977 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:42.209516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:42.709031 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.709125 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.709477 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.209073 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.209164 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.209444 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.709305 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.709379 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.709632 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:44.709672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:45.209865 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.210034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.211000 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:45.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.709376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.209157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.209473 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.709360 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:47.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.209066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.209434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:47.209489 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:47.709149 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.709220 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.709470 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.209367 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.208882 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.208956 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.209248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.708923 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:49.709401 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:50.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.209015 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.209369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:50.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.709142 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.709429 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.209160 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.209581 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.709273 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.709351 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.709670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:51.709725 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:52.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.209549 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.209889 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:52.709740 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.709823 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.710180 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.209005 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.709060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:54.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:54.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:54.708969 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.709044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.709387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.209314 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.209382 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.209635 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.709021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:56.209074 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:56.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:56.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.709266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.709098 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.709195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.709513 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.208958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.209260 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.708954 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.709420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:58.709478 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:59.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:59.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.709259 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.709688 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.710034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:00.710088 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:01.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.209777 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.210034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:01.709784 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.709858 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.710223 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.209882 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.209960 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.210301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.708970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:03.209442 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:03.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.209135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.209531 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.709573 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.709992 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:05.209202 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.209287 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.209886 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:05.209937 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:05.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.709000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:06.209058 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:06.209140 1844089 node_ready.go:38] duration metric: took 6m0.000414768s for node "functional-373432" to be "Ready" ...
	I1124 09:53:06.212349 1844089 out.go:203] 
	W1124 09:53:06.215554 1844089 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1124 09:53:06.215587 1844089 out.go:285] * 
	W1124 09:53:06.217723 1844089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:53:06.220637 1844089 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971906805Z" level=info msg="Using the internal default seccomp profile"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971915314Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971921788Z" level=info msg="No blockio config file specified, blockio not configured"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971927203Z" level=info msg="RDT not available in the host system"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.971939298Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.972776158Z" level=info msg="Conmon does support the --sync option"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.972804301Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.972821204Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.973519051Z" level=info msg="Conmon does support the --sync option"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.973546834Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.973688333Z" level=info msg="Updated default CNI network name to "
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.974254864Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.974668711Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 24 09:47:02 functional-373432 crio[6244]: time="2025-11-24T09:47:02.974738349Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030170117Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030217921Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030269803Z" level=info msg="Create NRI interface"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.03037405Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030382239Z" level=info msg="runtime interface created"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030396155Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030403704Z" level=info msg="runtime interface starting up..."
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.03041931Z" level=info msg="starting plugins..."
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030432398Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 09:47:03 functional-373432 crio[6244]: time="2025-11-24T09:47:03.030505859Z" level=info msg="No systemd watchdog enabled"
	Nov 24 09:47:03 functional-373432 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:53:10.550881    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:10.551599    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:10.553174    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:10.553625    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:10.554794    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 09:53:10 up  8:35,  0 user,  load average: 0.27, 0.22, 0.55
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 09:53:08 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:08 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Nov 24 09:53:08 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:08 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:08 functional-373432 kubelet[9433]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:08 functional-373432 kubelet[9433]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:08 functional-373432 kubelet[9433]: E1124 09:53:08.785428    9433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:08 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:08 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:09 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1144.
	Nov 24 09:53:09 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:09 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:09 functional-373432 kubelet[9455]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:09 functional-373432 kubelet[9455]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:09 functional-373432 kubelet[9455]: E1124 09:53:09.511569    9455 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:09 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:09 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:10 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1145.
	Nov 24 09:53:10 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:10 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:10 functional-373432 kubelet[9480]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:10 functional-373432 kubelet[9480]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:10 functional-373432 kubelet[9480]: E1124 09:53:10.273468    9480 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:10 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:10 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (340.331507ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 kubectl -- --context functional-373432 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 kubectl -- --context functional-373432 get pods: exit status 1 (107.989495ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-373432 kubectl -- --context functional-373432 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (305.663529ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 logs -n 25: (1.050070199s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr                                            │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format json --alsologtostderr                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format table --alsologtostderr                                                                                       │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ delete         │ -p functional-498341                                                                                                                              │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │ 24 Nov 25 09:38 UTC │
	│ start          │ -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │                     │
	│ start          │ -p functional-373432 --alsologtostderr -v=8                                                                                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │                     │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:latest                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add minikube-local-cache-test:functional-373432                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache delete minikube-local-cache-test:functional-373432                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl images                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ cache          │ functional-373432 cache reload                                                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ kubectl        │ functional-373432 kubectl -- --context functional-373432 get pods                                                                                 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:46:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:46:59.387016 1844089 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:46:59.387211 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387243 1844089 out.go:374] Setting ErrFile to fd 2...
	I1124 09:46:59.387263 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387557 1844089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:46:59.388008 1844089 out.go:368] Setting JSON to false
	I1124 09:46:59.388882 1844089 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30570,"bootTime":1763947050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:46:59.388979 1844089 start.go:143] virtualization:  
	I1124 09:46:59.392592 1844089 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:46:59.396303 1844089 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:46:59.396370 1844089 notify.go:221] Checking for updates...
	I1124 09:46:59.402093 1844089 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:46:59.405033 1844089 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:46:59.407908 1844089 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:46:59.411405 1844089 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:46:59.414441 1844089 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:46:59.417923 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:46:59.418109 1844089 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:46:59.451337 1844089 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:46:59.451452 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.507906 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.498692309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.508018 1844089 docker.go:319] overlay module found
	I1124 09:46:59.511186 1844089 out.go:179] * Using the docker driver based on existing profile
	I1124 09:46:59.514098 1844089 start.go:309] selected driver: docker
	I1124 09:46:59.514123 1844089 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.514235 1844089 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:46:59.514350 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.569823 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.559648119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.570237 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:46:59.570306 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:46:59.570363 1844089 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.573590 1844089 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:46:59.576497 1844089 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:46:59.579448 1844089 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:46:59.582547 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:46:59.582648 1844089 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:46:59.602755 1844089 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:46:59.602781 1844089 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:46:59.648405 1844089 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:46:59.826473 1844089 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:46:59.826636 1844089 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:46:59.826856 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:46:59.826893 1844089 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:46:59.826927 1844089 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:46:59.826975 1844089 start.go:364] duration metric: took 25.756µs to acquireMachinesLock for "functional-373432"
	I1124 09:46:59.826990 1844089 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:46:59.826996 1844089 fix.go:54] fixHost starting: 
	I1124 09:46:59.827258 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:46:59.843979 1844089 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:46:59.844011 1844089 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:46:59.847254 1844089 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:46:59.847299 1844089 machine.go:94] provisionDockerMachine start ...
	I1124 09:46:59.847379 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:46:59.872683 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:46:59.873034 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:46:59.873051 1844089 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:46:59.992797 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.044426 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.044454 1844089 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:47:00.044547 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.104810 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.105156 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.105170 1844089 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:47:00.386378 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.386611 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.409023 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.411110 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.411442 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.411467 1844089 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:00.595280 1844089 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595319 1844089 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595392 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:47:00.595381 1844089 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595403 1844089 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 139.325µs
	I1124 09:47:00.595412 1844089 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595423 1844089 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595434 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:47:00.595442 1844089 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 62.902µs
	I1124 09:47:00.595450 1844089 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595457 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:47:00.595463 1844089 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 41.207µs
	I1124 09:47:00.595469 1844089 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:47:00.595461 1844089 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595477 1844089 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595494 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:47:00.595500 1844089 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.394µs
	I1124 09:47:00.595507 1844089 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595510 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:47:00.595517 1844089 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595524 1844089 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.5µs
	I1124 09:47:00.595532 1844089 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595282 1844089 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595546 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:47:00.595552 1844089 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 36.923µs
	I1124 09:47:00.595556 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:47:00.595558 1844089 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:47:00.595562 1844089 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 302.437µs
	I1124 09:47:00.595572 1844089 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:47:00.595568 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:47:00.595581 1844089 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 263.856µs
	I1124 09:47:00.595587 1844089 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:47:00.595593 1844089 cache.go:87] Successfully saved all images to host disk.
	I1124 09:47:00.596331 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:00.596354 1844089 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:47:00.596379 1844089 ubuntu.go:190] setting up certificates
	I1124 09:47:00.596403 1844089 provision.go:84] configureAuth start
	I1124 09:47:00.596480 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:00.614763 1844089 provision.go:143] copyHostCerts
	I1124 09:47:00.614805 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614845 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:47:00.614865 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614942 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:47:00.615049 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615076 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:47:00.615081 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615111 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:47:00.615166 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615187 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:47:00.615191 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615218 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:47:00.615273 1844089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:47:00.746073 1844089 provision.go:177] copyRemoteCerts
	I1124 09:47:00.746146 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:00.746187 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.767050 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:00.873044 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1124 09:47:00.873153 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:00.891124 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1124 09:47:00.891207 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:47:00.909032 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1124 09:47:00.909209 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:47:00.927426 1844089 provision.go:87] duration metric: took 330.992349ms to configureAuth
	I1124 09:47:00.927482 1844089 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:47:00.927686 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:00.927808 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.945584 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.945906 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.945929 1844089 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:01.279482 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:01.279511 1844089 machine.go:97] duration metric: took 1.432203745s to provisionDockerMachine
	I1124 09:47:01.279522 1844089 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:47:01.279534 1844089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:01.279608 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:01.279659 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.306223 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.413310 1844089 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:01.416834 1844089 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1124 09:47:01.416855 1844089 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1124 09:47:01.416859 1844089 command_runner.go:130] > VERSION_ID="12"
	I1124 09:47:01.416863 1844089 command_runner.go:130] > VERSION="12 (bookworm)"
	I1124 09:47:01.416868 1844089 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1124 09:47:01.416884 1844089 command_runner.go:130] > ID=debian
	I1124 09:47:01.416889 1844089 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1124 09:47:01.416894 1844089 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1124 09:47:01.416900 1844089 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1124 09:47:01.416956 1844089 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:47:01.416971 1844089 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:47:01.416982 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:47:01.417038 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:47:01.417141 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:47:01.417149 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /etc/ssl/certs/18067042.pem
	I1124 09:47:01.417225 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:47:01.417238 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> /etc/test/nested/copy/1806704/hosts
	I1124 09:47:01.417285 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:47:01.425057 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:01.443829 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:47:01.461688 1844089 start.go:296] duration metric: took 182.151565ms for postStartSetup
	I1124 09:47:01.461806 1844089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:47:01.461866 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.478949 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.582285 1844089 command_runner.go:130] > 19%
	I1124 09:47:01.582359 1844089 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:47:01.587262 1844089 command_runner.go:130] > 159G
	I1124 09:47:01.587296 1844089 fix.go:56] duration metric: took 1.760298367s for fixHost
	I1124 09:47:01.587308 1844089 start.go:83] releasing machines lock for "functional-373432", held for 1.76032423s
	I1124 09:47:01.587385 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:01.605227 1844089 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:01.605290 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.605558 1844089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:01.605651 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.623897 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.640948 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.724713 1844089 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1763789673-21948", "minikube_version": "v1.37.0", "commit": "2996c7ec74d570fa8ab37e6f4f8813150d0c7473"}
	I1124 09:47:01.724863 1844089 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:01.812522 1844089 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1124 09:47:01.816014 1844089 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1124 09:47:01.816053 1844089 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1124 09:47:01.816128 1844089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:01.851397 1844089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1124 09:47:01.855673 1844089 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1124 09:47:01.855841 1844089 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:01.855908 1844089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:01.863705 1844089 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:47:01.863730 1844089 start.go:496] detecting cgroup driver to use...
	I1124 09:47:01.863762 1844089 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:47:01.863809 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:01.879426 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:01.892902 1844089 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:01.892974 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:01.908995 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:01.922294 1844089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:02.052541 1844089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:02.189051 1844089 docker.go:234] disabling docker service ...
	I1124 09:47:02.189218 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:02.205065 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:02.219126 1844089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:02.329712 1844089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:02.449311 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:02.462019 1844089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:02.474641 1844089 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1124 09:47:02.476035 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:02.633334 1844089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:02.633408 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.642946 1844089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:02.643028 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.652272 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.661578 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.670499 1844089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:02.678769 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.688087 1844089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.696980 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.705967 1844089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:02.713426 1844089 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1124 09:47:02.713510 1844089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:47:02.720989 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:02.841969 1844089 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:03.036830 1844089 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:03.036905 1844089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:03.040587 1844089 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1124 09:47:03.040611 1844089 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1124 09:47:03.040618 1844089 command_runner.go:130] > Device: 0,72	Inode: 1805        Links: 1
	I1124 09:47:03.040633 1844089 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:03.040639 1844089 command_runner.go:130] > Access: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040645 1844089 command_runner.go:130] > Modify: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040654 1844089 command_runner.go:130] > Change: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040658 1844089 command_runner.go:130] >  Birth: -
	I1124 09:47:03.041299 1844089 start.go:564] Will wait 60s for crictl version
	I1124 09:47:03.041375 1844089 ssh_runner.go:195] Run: which crictl
	I1124 09:47:03.044736 1844089 command_runner.go:130] > /usr/local/bin/crictl
	I1124 09:47:03.045405 1844089 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:47:03.072144 1844089 command_runner.go:130] > Version:  0.1.0
	I1124 09:47:03.072339 1844089 command_runner.go:130] > RuntimeName:  cri-o
	I1124 09:47:03.072489 1844089 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1124 09:47:03.072634 1844089 command_runner.go:130] > RuntimeApiVersion:  v1
	I1124 09:47:03.075078 1844089 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:47:03.075181 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.102664 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.102689 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.102697 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.102702 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.102708 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.102713 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.102717 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.102722 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.102726 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.102730 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.102734 1844089 command_runner.go:130] >      static
	I1124 09:47:03.102737 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.102741 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.102745 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.102753 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.102757 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.102763 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.102768 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.102772 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.102781 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.104732 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.133953 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.133980 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.133987 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.133991 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.133996 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.134000 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.134004 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.134008 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.134012 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.134016 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.134019 1844089 command_runner.go:130] >      static
	I1124 09:47:03.134023 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.134027 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.134031 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.134039 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.134043 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.134050 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.134056 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.134060 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.134068 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.140942 1844089 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:47:03.143873 1844089 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:47:03.160952 1844089 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:03.165052 1844089 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1124 09:47:03.165287 1844089 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:03.165490 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.325050 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.479106 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.632699 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:03.632773 1844089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:03.664623 1844089 command_runner.go:130] > {
	I1124 09:47:03.664647 1844089 command_runner.go:130] >   "images":  [
	I1124 09:47:03.664652 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664661 1844089 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1124 09:47:03.664666 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664683 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1124 09:47:03.664695 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664705 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664715 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1124 09:47:03.664722 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664727 1844089 command_runner.go:130] >       "size":  "29035622",
	I1124 09:47:03.664734 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664738 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664746 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664750 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664760 1844089 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1124 09:47:03.664768 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664775 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1124 09:47:03.664780 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664788 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664797 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1124 09:47:03.664804 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664808 1844089 command_runner.go:130] >       "size":  "74488375",
	I1124 09:47:03.664816 1844089 command_runner.go:130] >       "username":  "nonroot",
	I1124 09:47:03.664820 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664827 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664831 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664838 1844089 command_runner.go:130] >       "id":  "1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca",
	I1124 09:47:03.664845 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664851 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.24-0"
	I1124 09:47:03.664855 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664859 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664873 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:62cae8d38d7e1187ef2841ebc55bef1c5a46f21a69675fae8351f92d3a3e9bc6"
	I1124 09:47:03.664880 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664885 1844089 command_runner.go:130] >       "size":  "63341525",
	I1124 09:47:03.664892 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.664896 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.664904 1844089 command_runner.go:130] >       },
	I1124 09:47:03.664908 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664923 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664929 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664932 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664939 1844089 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1124 09:47:03.664947 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664951 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1124 09:47:03.664959 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664963 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664974 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1124 09:47:03.664987 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1124 09:47:03.664994 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664999 1844089 command_runner.go:130] >       "size":  "60857170",
	I1124 09:47:03.665002 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665009 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665013 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665016 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665020 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665024 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665028 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665039 1844089 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1124 09:47:03.665043 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665053 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1124 09:47:03.665057 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665065 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665078 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1124 09:47:03.665085 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665089 1844089 command_runner.go:130] >       "size":  "84947242",
	I1124 09:47:03.665093 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665131 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665140 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665144 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665148 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665155 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665163 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665174 1844089 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1124 09:47:03.665181 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665187 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1124 09:47:03.665195 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665198 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665206 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1124 09:47:03.665213 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665217 1844089 command_runner.go:130] >       "size":  "72167568",
	I1124 09:47:03.665221 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665229 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665232 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665236 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665244 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665247 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665254 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665262 1844089 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1124 09:47:03.665269 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665275 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1124 09:47:03.665278 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665285 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665292 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1124 09:47:03.665299 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665304 1844089 command_runner.go:130] >       "size":  "74105124",
	I1124 09:47:03.665308 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665315 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665319 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665326 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665333 1844089 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1124 09:47:03.665340 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665346 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1124 09:47:03.665353 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665357 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665369 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1124 09:47:03.665376 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665380 1844089 command_runner.go:130] >       "size":  "49819792",
	I1124 09:47:03.665384 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665388 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665396 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665401 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665405 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665412 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665415 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665426 1844089 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1124 09:47:03.665434 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665439 1844089 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.665442 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665446 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665456 1844089 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1124 09:47:03.665460 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665469 1844089 command_runner.go:130] >       "size":  "517328",
	I1124 09:47:03.665473 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665478 1844089 command_runner.go:130] >         "value":  "65535"
	I1124 09:47:03.665485 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665489 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665499 1844089 command_runner.go:130] >       "pinned":  true
	I1124 09:47:03.665506 1844089 command_runner.go:130] >     }
	I1124 09:47:03.665510 1844089 command_runner.go:130] >   ]
	I1124 09:47:03.665517 1844089 command_runner.go:130] > }
	I1124 09:47:03.667798 1844089 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:47:03.667821 1844089 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:47:03.667827 1844089 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:47:03.667924 1844089 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:47:03.668011 1844089 ssh_runner.go:195] Run: crio config
	I1124 09:47:03.726362 1844089 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1124 09:47:03.726390 1844089 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1124 09:47:03.726403 1844089 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1124 09:47:03.726416 1844089 command_runner.go:130] > #
	I1124 09:47:03.726461 1844089 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1124 09:47:03.726469 1844089 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1124 09:47:03.726481 1844089 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1124 09:47:03.726488 1844089 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1124 09:47:03.726498 1844089 command_runner.go:130] > # reload'.
	I1124 09:47:03.726518 1844089 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1124 09:47:03.726529 1844089 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1124 09:47:03.726536 1844089 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1124 09:47:03.726563 1844089 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1124 09:47:03.726573 1844089 command_runner.go:130] > [crio]
	I1124 09:47:03.726579 1844089 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1124 09:47:03.726585 1844089 command_runner.go:130] > # containers images, in this directory.
	I1124 09:47:03.727202 1844089 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1124 09:47:03.727221 1844089 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1124 09:47:03.727766 1844089 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1124 09:47:03.727795 1844089 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1124 09:47:03.728310 1844089 command_runner.go:130] > # imagestore = ""
	I1124 09:47:03.728328 1844089 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1124 09:47:03.728337 1844089 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1124 09:47:03.728921 1844089 command_runner.go:130] > # storage_driver = "overlay"
	I1124 09:47:03.728938 1844089 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1124 09:47:03.728946 1844089 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1124 09:47:03.729270 1844089 command_runner.go:130] > # storage_option = [
	I1124 09:47:03.729595 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.729612 1844089 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1124 09:47:03.729620 1844089 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1124 09:47:03.730268 1844089 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1124 09:47:03.730286 1844089 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1124 09:47:03.730295 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1124 09:47:03.730299 1844089 command_runner.go:130] > # always happen on a node reboot
	I1124 09:47:03.730901 1844089 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1124 09:47:03.730939 1844089 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1124 09:47:03.730951 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1124 09:47:03.730957 1844089 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1124 09:47:03.731426 1844089 command_runner.go:130] > # version_file_persist = ""
	I1124 09:47:03.731444 1844089 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1124 09:47:03.731453 1844089 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1124 09:47:03.732044 1844089 command_runner.go:130] > # internal_wipe = true
	I1124 09:47:03.732064 1844089 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1124 09:47:03.732071 1844089 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1124 09:47:03.732663 1844089 command_runner.go:130] > # internal_repair = true
	I1124 09:47:03.732708 1844089 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1124 09:47:03.732717 1844089 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1124 09:47:03.732723 1844089 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1124 09:47:03.733344 1844089 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1124 09:47:03.733360 1844089 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1124 09:47:03.733364 1844089 command_runner.go:130] > [crio.api]
	I1124 09:47:03.733370 1844089 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1124 09:47:03.733954 1844089 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1124 09:47:03.733970 1844089 command_runner.go:130] > # IP address on which the stream server will listen.
	I1124 09:47:03.734597 1844089 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1124 09:47:03.734618 1844089 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1124 09:47:03.734638 1844089 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1124 09:47:03.735322 1844089 command_runner.go:130] > # stream_port = "0"
	I1124 09:47:03.735342 1844089 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1124 09:47:03.735920 1844089 command_runner.go:130] > # stream_enable_tls = false
	I1124 09:47:03.735936 1844089 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1124 09:47:03.736379 1844089 command_runner.go:130] > # stream_idle_timeout = ""
	I1124 09:47:03.736427 1844089 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1124 09:47:03.736442 1844089 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1124 09:47:03.736931 1844089 command_runner.go:130] > # stream_tls_cert = ""
	I1124 09:47:03.736947 1844089 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1124 09:47:03.736954 1844089 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1124 09:47:03.737422 1844089 command_runner.go:130] > # stream_tls_key = ""
	I1124 09:47:03.737439 1844089 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1124 09:47:03.737447 1844089 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1124 09:47:03.737466 1844089 command_runner.go:130] > # automatically pick up the changes.
	I1124 09:47:03.737919 1844089 command_runner.go:130] > # stream_tls_ca = ""
	I1124 09:47:03.737973 1844089 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.738690 1844089 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1124 09:47:03.738709 1844089 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.739334 1844089 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1124 09:47:03.739351 1844089 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1124 09:47:03.739358 1844089 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1124 09:47:03.739383 1844089 command_runner.go:130] > [crio.runtime]
	I1124 09:47:03.739395 1844089 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1124 09:47:03.739402 1844089 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1124 09:47:03.739406 1844089 command_runner.go:130] > # "nofile=1024:2048"
	I1124 09:47:03.739432 1844089 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1124 09:47:03.739736 1844089 command_runner.go:130] > # default_ulimits = [
	I1124 09:47:03.740060 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.740075 1844089 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1124 09:47:03.740677 1844089 command_runner.go:130] > # no_pivot = false
	I1124 09:47:03.740693 1844089 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1124 09:47:03.740700 1844089 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1124 09:47:03.741305 1844089 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1124 09:47:03.741322 1844089 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1124 09:47:03.741328 1844089 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1124 09:47:03.741356 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.741816 1844089 command_runner.go:130] > # conmon = ""
	I1124 09:47:03.741833 1844089 command_runner.go:130] > # Cgroup setting for conmon
	I1124 09:47:03.741841 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1124 09:47:03.742193 1844089 command_runner.go:130] > conmon_cgroup = "pod"
	I1124 09:47:03.742211 1844089 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1124 09:47:03.742237 1844089 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1124 09:47:03.742253 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.742594 1844089 command_runner.go:130] > # conmon_env = [
	I1124 09:47:03.742962 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.742977 1844089 command_runner.go:130] > # Additional environment variables to set for all the
	I1124 09:47:03.742984 1844089 command_runner.go:130] > # containers. These are overridden if set in the
	I1124 09:47:03.742990 1844089 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1124 09:47:03.743288 1844089 command_runner.go:130] > # default_env = [
	I1124 09:47:03.743607 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.743619 1844089 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1124 09:47:03.743646 1844089 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1124 09:47:03.744217 1844089 command_runner.go:130] > # selinux = false
	I1124 09:47:03.744234 1844089 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1124 09:47:03.744279 1844089 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1124 09:47:03.744293 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.744768 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.744784 1844089 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1124 09:47:03.744790 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745254 1844089 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1124 09:47:03.745273 1844089 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1124 09:47:03.745281 1844089 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1124 09:47:03.745308 1844089 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1124 09:47:03.745322 1844089 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1124 09:47:03.745328 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745934 1844089 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1124 09:47:03.745975 1844089 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1124 09:47:03.745989 1844089 command_runner.go:130] > # the cgroup blockio controller.
	I1124 09:47:03.746500 1844089 command_runner.go:130] > # blockio_config_file = ""
	I1124 09:47:03.746515 1844089 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1124 09:47:03.746541 1844089 command_runner.go:130] > # blockio parameters.
	I1124 09:47:03.747165 1844089 command_runner.go:130] > # blockio_reload = false
	I1124 09:47:03.747182 1844089 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1124 09:47:03.747187 1844089 command_runner.go:130] > # irqbalance daemon.
	I1124 09:47:03.747784 1844089 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1124 09:47:03.747803 1844089 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1124 09:47:03.747830 1844089 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1124 09:47:03.747843 1844089 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1124 09:47:03.748453 1844089 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1124 09:47:03.748471 1844089 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1124 09:47:03.748496 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.748966 1844089 command_runner.go:130] > # rdt_config_file = ""
	I1124 09:47:03.748982 1844089 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1124 09:47:03.749348 1844089 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1124 09:47:03.749364 1844089 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1124 09:47:03.749770 1844089 command_runner.go:130] > # separate_pull_cgroup = ""
	I1124 09:47:03.749788 1844089 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1124 09:47:03.749796 1844089 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1124 09:47:03.749820 1844089 command_runner.go:130] > # will be added.
	I1124 09:47:03.749833 1844089 command_runner.go:130] > # default_capabilities = [
	I1124 09:47:03.750067 1844089 command_runner.go:130] > # 	"CHOWN",
	I1124 09:47:03.750401 1844089 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1124 09:47:03.750646 1844089 command_runner.go:130] > # 	"FSETID",
	I1124 09:47:03.750659 1844089 command_runner.go:130] > # 	"FOWNER",
	I1124 09:47:03.750665 1844089 command_runner.go:130] > # 	"SETGID",
	I1124 09:47:03.750669 1844089 command_runner.go:130] > # 	"SETUID",
	I1124 09:47:03.750725 1844089 command_runner.go:130] > # 	"SETPCAP",
	I1124 09:47:03.750739 1844089 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1124 09:47:03.750745 1844089 command_runner.go:130] > # 	"KILL",
	I1124 09:47:03.750755 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.750774 1844089 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1124 09:47:03.750785 1844089 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1124 09:47:03.750991 1844089 command_runner.go:130] > # add_inheritable_capabilities = false
	I1124 09:47:03.751004 1844089 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1124 09:47:03.751023 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751034 1844089 command_runner.go:130] > default_sysctls = [
	I1124 09:47:03.751219 1844089 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1124 09:47:03.751480 1844089 command_runner.go:130] > ]
	I1124 09:47:03.751494 1844089 command_runner.go:130] > # List of devices on the host that a
	I1124 09:47:03.751501 1844089 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1124 09:47:03.751522 1844089 command_runner.go:130] > # allowed_devices = [
	I1124 09:47:03.751532 1844089 command_runner.go:130] > # 	"/dev/fuse",
	I1124 09:47:03.751536 1844089 command_runner.go:130] > # 	"/dev/net/tun",
	I1124 09:47:03.751539 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751545 1844089 command_runner.go:130] > # List of additional devices, specified as
	I1124 09:47:03.751558 1844089 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1124 09:47:03.751576 1844089 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1124 09:47:03.751614 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751625 1844089 command_runner.go:130] > # additional_devices = [
	I1124 09:47:03.751802 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751816 1844089 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1124 09:47:03.752056 1844089 command_runner.go:130] > # cdi_spec_dirs = [
	I1124 09:47:03.752288 1844089 command_runner.go:130] > # 	"/etc/cdi",
	I1124 09:47:03.752302 1844089 command_runner.go:130] > # 	"/var/run/cdi",
	I1124 09:47:03.752307 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752313 1844089 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1124 09:47:03.752348 1844089 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1124 09:47:03.752353 1844089 command_runner.go:130] > # Defaults to false.
	I1124 09:47:03.752752 1844089 command_runner.go:130] > # device_ownership_from_security_context = false
	I1124 09:47:03.752770 1844089 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1124 09:47:03.752778 1844089 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1124 09:47:03.752782 1844089 command_runner.go:130] > # hooks_dir = [
	I1124 09:47:03.752808 1844089 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1124 09:47:03.752819 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752826 1844089 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1124 09:47:03.752833 1844089 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1124 09:47:03.752842 1844089 command_runner.go:130] > # its default mounts from the following two files:
	I1124 09:47:03.752845 1844089 command_runner.go:130] > #
	I1124 09:47:03.752852 1844089 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1124 09:47:03.752858 1844089 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1124 09:47:03.752881 1844089 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1124 09:47:03.752891 1844089 command_runner.go:130] > #
	I1124 09:47:03.752897 1844089 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1124 09:47:03.752913 1844089 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1124 09:47:03.752928 1844089 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1124 09:47:03.752934 1844089 command_runner.go:130] > #      only add mounts it finds in this file.
	I1124 09:47:03.752937 1844089 command_runner.go:130] > #
	I1124 09:47:03.752941 1844089 command_runner.go:130] > # default_mounts_file = ""
	I1124 09:47:03.752946 1844089 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1124 09:47:03.752955 1844089 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1124 09:47:03.753190 1844089 command_runner.go:130] > # pids_limit = -1
	I1124 09:47:03.753207 1844089 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1124 09:47:03.753245 1844089 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1124 09:47:03.753260 1844089 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1124 09:47:03.753269 1844089 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1124 09:47:03.753278 1844089 command_runner.go:130] > # log_size_max = -1
	I1124 09:47:03.753287 1844089 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1124 09:47:03.753296 1844089 command_runner.go:130] > # log_to_journald = false
	I1124 09:47:03.753313 1844089 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1124 09:47:03.753722 1844089 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1124 09:47:03.753734 1844089 command_runner.go:130] > # Path to directory for container attach sockets.
	I1124 09:47:03.753771 1844089 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1124 09:47:03.753785 1844089 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1124 09:47:03.753789 1844089 command_runner.go:130] > # bind_mount_prefix = ""
	I1124 09:47:03.753796 1844089 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1124 09:47:03.753804 1844089 command_runner.go:130] > # read_only = false
	I1124 09:47:03.753810 1844089 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1124 09:47:03.753817 1844089 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1124 09:47:03.753824 1844089 command_runner.go:130] > # live configuration reload.
	I1124 09:47:03.753828 1844089 command_runner.go:130] > # log_level = "info"
	I1124 09:47:03.753845 1844089 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1124 09:47:03.753857 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.754025 1844089 command_runner.go:130] > # log_filter = ""
	I1124 09:47:03.754041 1844089 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754049 1844089 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1124 09:47:03.754066 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754079 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754487 1844089 command_runner.go:130] > # uid_mappings = ""
	I1124 09:47:03.754504 1844089 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754512 1844089 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1124 09:47:03.754516 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754547 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754559 1844089 command_runner.go:130] > # gid_mappings = ""
	I1124 09:47:03.754565 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1124 09:47:03.754572 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754582 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754590 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754595 1844089 command_runner.go:130] > # minimum_mappable_uid = -1
	I1124 09:47:03.754627 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1124 09:47:03.754641 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754648 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754662 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754929 1844089 command_runner.go:130] > # minimum_mappable_gid = -1
	I1124 09:47:03.754942 1844089 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1124 09:47:03.754970 1844089 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1124 09:47:03.754983 1844089 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1124 09:47:03.754989 1844089 command_runner.go:130] > # ctr_stop_timeout = 30
	I1124 09:47:03.754994 1844089 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1124 09:47:03.755006 1844089 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1124 09:47:03.755011 1844089 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1124 09:47:03.755016 1844089 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1124 09:47:03.755021 1844089 command_runner.go:130] > # drop_infra_ctr = true
	I1124 09:47:03.755048 1844089 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1124 09:47:03.755061 1844089 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1124 09:47:03.755080 1844089 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1124 09:47:03.755090 1844089 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1124 09:47:03.755098 1844089 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1124 09:47:03.755104 1844089 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1124 09:47:03.755110 1844089 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1124 09:47:03.755118 1844089 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1124 09:47:03.755122 1844089 command_runner.go:130] > # shared_cpuset = ""
	I1124 09:47:03.755135 1844089 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1124 09:47:03.755143 1844089 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1124 09:47:03.755164 1844089 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1124 09:47:03.755182 1844089 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1124 09:47:03.755369 1844089 command_runner.go:130] > # pinns_path = ""
	I1124 09:47:03.755383 1844089 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1124 09:47:03.755391 1844089 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1124 09:47:03.755617 1844089 command_runner.go:130] > # enable_criu_support = true
	I1124 09:47:03.755632 1844089 command_runner.go:130] > # Enable/disable the generation of the container,
	I1124 09:47:03.755639 1844089 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1124 09:47:03.755935 1844089 command_runner.go:130] > # enable_pod_events = false
	I1124 09:47:03.755951 1844089 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1124 09:47:03.755976 1844089 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1124 09:47:03.755988 1844089 command_runner.go:130] > # default_runtime = "crun"
	I1124 09:47:03.756007 1844089 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1124 09:47:03.756063 1844089 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1124 09:47:03.756088 1844089 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1124 09:47:03.756099 1844089 command_runner.go:130] > # creation as a file is not desired either.
	I1124 09:47:03.756108 1844089 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1124 09:47:03.756127 1844089 command_runner.go:130] > # the hostname is being managed dynamically.
	I1124 09:47:03.756133 1844089 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1124 09:47:03.756166 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.756181 1844089 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1124 09:47:03.756199 1844089 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1124 09:47:03.756211 1844089 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1124 09:47:03.756217 1844089 command_runner.go:130] > # Each entry in the table should follow the format:
	I1124 09:47:03.756220 1844089 command_runner.go:130] > #
	I1124 09:47:03.756230 1844089 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1124 09:47:03.756235 1844089 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1124 09:47:03.756244 1844089 command_runner.go:130] > # runtime_type = "oci"
	I1124 09:47:03.756248 1844089 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1124 09:47:03.756253 1844089 command_runner.go:130] > # inherit_default_runtime = false
	I1124 09:47:03.756258 1844089 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1124 09:47:03.756285 1844089 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1124 09:47:03.756297 1844089 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1124 09:47:03.756301 1844089 command_runner.go:130] > # monitor_env = []
	I1124 09:47:03.756306 1844089 command_runner.go:130] > # privileged_without_host_devices = false
	I1124 09:47:03.756313 1844089 command_runner.go:130] > # allowed_annotations = []
	I1124 09:47:03.756319 1844089 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1124 09:47:03.756330 1844089 command_runner.go:130] > # no_sync_log = false
	I1124 09:47:03.756335 1844089 command_runner.go:130] > # default_annotations = {}
	I1124 09:47:03.756339 1844089 command_runner.go:130] > # stream_websockets = false
	I1124 09:47:03.756349 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.756390 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.756402 1844089 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1124 09:47:03.756409 1844089 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1124 09:47:03.756416 1844089 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1124 09:47:03.756427 1844089 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1124 09:47:03.756448 1844089 command_runner.go:130] > #   in $PATH.
	I1124 09:47:03.756456 1844089 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1124 09:47:03.756461 1844089 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1124 09:47:03.756468 1844089 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1124 09:47:03.756477 1844089 command_runner.go:130] > #   state.
	I1124 09:47:03.756489 1844089 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1124 09:47:03.756495 1844089 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1124 09:47:03.756515 1844089 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1124 09:47:03.756528 1844089 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1124 09:47:03.756534 1844089 command_runner.go:130] > #   the values from the default runtime on load time.
	I1124 09:47:03.756542 1844089 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1124 09:47:03.756551 1844089 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1124 09:47:03.756557 1844089 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1124 09:47:03.756564 1844089 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1124 09:47:03.756571 1844089 command_runner.go:130] > #   The currently recognized values are:
	I1124 09:47:03.756579 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1124 09:47:03.756608 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1124 09:47:03.756621 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1124 09:47:03.756627 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1124 09:47:03.756635 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1124 09:47:03.756647 1844089 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1124 09:47:03.756654 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1124 09:47:03.756661 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1124 09:47:03.756671 1844089 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1124 09:47:03.756687 1844089 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1124 09:47:03.756700 1844089 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1124 09:47:03.756720 1844089 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1124 09:47:03.756731 1844089 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1124 09:47:03.756738 1844089 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1124 09:47:03.756751 1844089 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1124 09:47:03.756759 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1124 09:47:03.756769 1844089 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1124 09:47:03.756774 1844089 command_runner.go:130] > #   deprecated option "conmon".
	I1124 09:47:03.756781 1844089 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1124 09:47:03.756803 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1124 09:47:03.756820 1844089 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1124 09:47:03.756831 1844089 command_runner.go:130] > #   should be moved to the container's cgroup
	I1124 09:47:03.756843 1844089 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1124 09:47:03.756853 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1124 09:47:03.756862 1844089 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1124 09:47:03.756870 1844089 command_runner.go:130] > #   conmon-rs by using:
	I1124 09:47:03.756878 1844089 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1124 09:47:03.756886 1844089 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1124 09:47:03.756907 1844089 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1124 09:47:03.756926 1844089 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1124 09:47:03.756938 1844089 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1124 09:47:03.756945 1844089 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1124 09:47:03.756958 1844089 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1124 09:47:03.756963 1844089 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1124 09:47:03.756972 1844089 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1124 09:47:03.756984 1844089 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1124 09:47:03.756999 1844089 command_runner.go:130] > #   when a machine crash happens.
	I1124 09:47:03.757012 1844089 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1124 09:47:03.757021 1844089 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1124 09:47:03.757033 1844089 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1124 09:47:03.757038 1844089 command_runner.go:130] > #   seccomp profile for the runtime.
	I1124 09:47:03.757047 1844089 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1124 09:47:03.757058 1844089 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1124 09:47:03.757076 1844089 command_runner.go:130] > #
	I1124 09:47:03.757087 1844089 command_runner.go:130] > # Using the seccomp notifier feature:
	I1124 09:47:03.757091 1844089 command_runner.go:130] > #
	I1124 09:47:03.757115 1844089 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1124 09:47:03.757130 1844089 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1124 09:47:03.757134 1844089 command_runner.go:130] > #
	I1124 09:47:03.757141 1844089 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1124 09:47:03.757151 1844089 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1124 09:47:03.757154 1844089 command_runner.go:130] > #
	I1124 09:47:03.757165 1844089 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1124 09:47:03.757172 1844089 command_runner.go:130] > # feature.
	I1124 09:47:03.757175 1844089 command_runner.go:130] > #
	I1124 09:47:03.757195 1844089 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1124 09:47:03.757204 1844089 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1124 09:47:03.757220 1844089 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1124 09:47:03.757233 1844089 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1124 09:47:03.757239 1844089 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1124 09:47:03.757247 1844089 command_runner.go:130] > #
	I1124 09:47:03.757258 1844089 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1124 09:47:03.757268 1844089 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1124 09:47:03.757271 1844089 command_runner.go:130] > #
	I1124 09:47:03.757277 1844089 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1124 09:47:03.757283 1844089 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1124 09:47:03.757298 1844089 command_runner.go:130] > #
	I1124 09:47:03.757320 1844089 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1124 09:47:03.757333 1844089 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1124 09:47:03.757341 1844089 command_runner.go:130] > # limitation.
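The notifier setup described above can be sketched as a minimal drop-in fragment (runtime name and annotation value are taken from the comments above; the drop-in path is illustrative):

```toml
# Sketch only: allow the seccomp notifier annotation for the runc runtime.
# Place in a drop-in such as /etc/crio/crio.conf.d/ (path illustrative).
[crio.runtime.runtimes.runc]
allowed_annotations = [
    "io.kubernetes.cri-o.seccompNotifierAction",
]
```

With this in place, annotating a Pod sandbox with `io.kubernetes.cri-o.seccompNotifierAction: stop` (and `restartPolicy: Never`, per the note above) would have CRI-O terminate the workload after the 5-second timeout on a blocked syscall.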
	I1124 09:47:03.757617 1844089 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1124 09:47:03.757630 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1124 09:47:03.757635 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.757639 1844089 command_runner.go:130] > runtime_root = "/run/crun"
	I1124 09:47:03.757643 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.757670 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.757675 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.757680 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.757690 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.757695 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.757700 1844089 command_runner.go:130] > allowed_annotations = [
	I1124 09:47:03.757954 1844089 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1124 09:47:03.757971 1844089 command_runner.go:130] > ]
	I1124 09:47:03.757978 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.757982 1844089 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1124 09:47:03.758003 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1124 09:47:03.758013 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.758018 1844089 command_runner.go:130] > runtime_root = "/run/runc"
	I1124 09:47:03.758023 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.758033 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.758037 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.758042 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.758047 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.758051 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.758456 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.758471 1844089 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1124 09:47:03.758477 1844089 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1124 09:47:03.758504 1844089 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1124 09:47:03.758514 1844089 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1124 09:47:03.758525 1844089 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1124 09:47:03.758550 1844089 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1124 09:47:03.758572 1844089 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1124 09:47:03.758585 1844089 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1124 09:47:03.758595 1844089 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1124 09:47:03.758608 1844089 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1124 09:47:03.758614 1844089 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1124 09:47:03.758621 1844089 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1124 09:47:03.758629 1844089 command_runner.go:130] > # Example:
	I1124 09:47:03.758634 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1124 09:47:03.758650 1844089 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1124 09:47:03.758663 1844089 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1124 09:47:03.758670 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1124 09:47:03.758684 1844089 command_runner.go:130] > # cpuset = "0-1"
	I1124 09:47:03.758691 1844089 command_runner.go:130] > # cpushares = "5"
	I1124 09:47:03.758695 1844089 command_runner.go:130] > # cpuquota = "1000"
	I1124 09:47:03.758700 1844089 command_runner.go:130] > # cpuperiod = "100000"
	I1124 09:47:03.758703 1844089 command_runner.go:130] > # cpulimit = "35"
	I1124 09:47:03.758714 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.758719 1844089 command_runner.go:130] > # The workload name is workload-type.
	I1124 09:47:03.758726 1844089 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1124 09:47:03.758738 1844089 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1124 09:47:03.758744 1844089 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1124 09:47:03.758763 1844089 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1124 09:47:03.758772 1844089 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1124 09:47:03.758787 1844089 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1124 09:47:03.758800 1844089 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1124 09:47:03.758805 1844089 command_runner.go:130] > # Default value is set to true
	I1124 09:47:03.758816 1844089 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1124 09:47:03.758822 1844089 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1124 09:47:03.758827 1844089 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1124 09:47:03.758837 1844089 command_runner.go:130] > # Default value is set to 'false'
	I1124 09:47:03.758841 1844089 command_runner.go:130] > # disable_hostport_mapping = false
	I1124 09:47:03.758846 1844089 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1124 09:47:03.758869 1844089 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1124 09:47:03.759115 1844089 command_runner.go:130] > # timezone = ""
	I1124 09:47:03.759131 1844089 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1124 09:47:03.759134 1844089 command_runner.go:130] > #
	I1124 09:47:03.759141 1844089 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1124 09:47:03.759163 1844089 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1124 09:47:03.759174 1844089 command_runner.go:130] > [crio.image]
	I1124 09:47:03.759180 1844089 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1124 09:47:03.759194 1844089 command_runner.go:130] > # default_transport = "docker://"
	I1124 09:47:03.759204 1844089 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1124 09:47:03.759211 1844089 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759215 1844089 command_runner.go:130] > # global_auth_file = ""
	I1124 09:47:03.759237 1844089 command_runner.go:130] > # The image used to instantiate infra containers.
	I1124 09:47:03.759259 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759457 1844089 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.759477 1844089 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1124 09:47:03.759497 1844089 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759511 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759702 1844089 command_runner.go:130] > # pause_image_auth_file = ""
	I1124 09:47:03.759716 1844089 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1124 09:47:03.759723 1844089 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1124 09:47:03.759742 1844089 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1124 09:47:03.759757 1844089 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1124 09:47:03.760047 1844089 command_runner.go:130] > # pause_command = "/pause"
	I1124 09:47:03.760064 1844089 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1124 09:47:03.760071 1844089 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1124 09:47:03.760077 1844089 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1124 09:47:03.760108 1844089 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1124 09:47:03.760115 1844089 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1124 09:47:03.760126 1844089 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1124 09:47:03.760131 1844089 command_runner.go:130] > # pinned_images = [
	I1124 09:47:03.760134 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760140 1844089 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1124 09:47:03.760146 1844089 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1124 09:47:03.760157 1844089 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1124 09:47:03.760175 1844089 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1124 09:47:03.760186 1844089 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1124 09:47:03.760191 1844089 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1124 09:47:03.760197 1844089 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1124 09:47:03.760209 1844089 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1124 09:47:03.760216 1844089 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1124 09:47:03.760225 1844089 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1124 09:47:03.760231 1844089 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1124 09:47:03.760246 1844089 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1124 09:47:03.760260 1844089 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1124 09:47:03.760282 1844089 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1124 09:47:03.760292 1844089 command_runner.go:130] > # changing them here.
	I1124 09:47:03.760298 1844089 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1124 09:47:03.760302 1844089 command_runner.go:130] > # insecure_registries = [
	I1124 09:47:03.760312 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760318 1844089 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1124 09:47:03.760329 1844089 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1124 09:47:03.760704 1844089 command_runner.go:130] > # image_volumes = "mkdir"
	I1124 09:47:03.760720 1844089 command_runner.go:130] > # Temporary directory to use for storing big files
	I1124 09:47:03.760964 1844089 command_runner.go:130] > # big_files_temporary_dir = ""
	I1124 09:47:03.760980 1844089 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1124 09:47:03.760987 1844089 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1124 09:47:03.760992 1844089 command_runner.go:130] > # auto_reload_registries = false
	I1124 09:47:03.761030 1844089 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1124 09:47:03.761047 1844089 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1124 09:47:03.761054 1844089 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1124 09:47:03.761232 1844089 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1124 09:47:03.761247 1844089 command_runner.go:130] > # The mode of short name resolution.
	I1124 09:47:03.761255 1844089 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1124 09:47:03.761263 1844089 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1124 09:47:03.761289 1844089 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1124 09:47:03.761475 1844089 command_runner.go:130] > # short_name_mode = "enforcing"
	I1124 09:47:03.761491 1844089 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1124 09:47:03.761498 1844089 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1124 09:47:03.761714 1844089 command_runner.go:130] > # oci_artifact_mount_support = true
	I1124 09:47:03.761730 1844089 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1124 09:47:03.761735 1844089 command_runner.go:130] > # CNI plugins.
	I1124 09:47:03.761738 1844089 command_runner.go:130] > [crio.network]
	I1124 09:47:03.761777 1844089 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1124 09:47:03.761790 1844089 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1124 09:47:03.761797 1844089 command_runner.go:130] > # cni_default_network = ""
	I1124 09:47:03.761810 1844089 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1124 09:47:03.761814 1844089 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1124 09:47:03.761820 1844089 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1124 09:47:03.761839 1844089 command_runner.go:130] > # plugin_dirs = [
	I1124 09:47:03.762075 1844089 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1124 09:47:03.762088 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762092 1844089 command_runner.go:130] > # List of included pod metrics.
	I1124 09:47:03.762097 1844089 command_runner.go:130] > # included_pod_metrics = [
	I1124 09:47:03.762100 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762106 1844089 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1124 09:47:03.762124 1844089 command_runner.go:130] > [crio.metrics]
	I1124 09:47:03.762136 1844089 command_runner.go:130] > # Globally enable or disable metrics support.
	I1124 09:47:03.762321 1844089 command_runner.go:130] > # enable_metrics = false
	I1124 09:47:03.762336 1844089 command_runner.go:130] > # Specify enabled metrics collectors.
	I1124 09:47:03.762342 1844089 command_runner.go:130] > # Per default all metrics are enabled.
	I1124 09:47:03.762349 1844089 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1124 09:47:03.762356 1844089 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1124 09:47:03.762386 1844089 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1124 09:47:03.762392 1844089 command_runner.go:130] > # metrics_collectors = [
	I1124 09:47:03.763119 1844089 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1124 09:47:03.763143 1844089 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1124 09:47:03.763149 1844089 command_runner.go:130] > # 	"containers_oom_total",
	I1124 09:47:03.763153 1844089 command_runner.go:130] > # 	"processes_defunct",
	I1124 09:47:03.763188 1844089 command_runner.go:130] > # 	"operations_total",
	I1124 09:47:03.763201 1844089 command_runner.go:130] > # 	"operations_latency_seconds",
	I1124 09:47:03.763207 1844089 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1124 09:47:03.763212 1844089 command_runner.go:130] > # 	"operations_errors_total",
	I1124 09:47:03.763216 1844089 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1124 09:47:03.763221 1844089 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1124 09:47:03.763226 1844089 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1124 09:47:03.763237 1844089 command_runner.go:130] > # 	"image_pulls_success_total",
	I1124 09:47:03.763260 1844089 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1124 09:47:03.763265 1844089 command_runner.go:130] > # 	"containers_oom_count_total",
	I1124 09:47:03.763270 1844089 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1124 09:47:03.763282 1844089 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1124 09:47:03.763286 1844089 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1124 09:47:03.763290 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763295 1844089 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1124 09:47:03.763300 1844089 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1124 09:47:03.763305 1844089 command_runner.go:130] > # The port on which the metrics server will listen.
	I1124 09:47:03.763313 1844089 command_runner.go:130] > # metrics_port = 9090
	I1124 09:47:03.763327 1844089 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1124 09:47:03.763337 1844089 command_runner.go:130] > # metrics_socket = ""
	I1124 09:47:03.763343 1844089 command_runner.go:130] > # The certificate for the secure metrics server.
	I1124 09:47:03.763349 1844089 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1124 09:47:03.763360 1844089 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1124 09:47:03.763365 1844089 command_runner.go:130] > # certificate on any modification event.
	I1124 09:47:03.763369 1844089 command_runner.go:130] > # metrics_cert = ""
	I1124 09:47:03.763375 1844089 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1124 09:47:03.763379 1844089 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1124 09:47:03.763384 1844089 command_runner.go:130] > # metrics_key = ""
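The metrics options listed above can be combined into a short drop-in sketch (keys and defaults come from the comments above; the file path and collector selection are illustrative assumptions):

```toml
# Sketch: enable the Prometheus metrics endpoint described above.
# Drop-in path illustrative, e.g. /etc/crio/crio.conf.d/20-metrics.conf
[crio.metrics]
enable_metrics = true
metrics_host = "127.0.0.1"
metrics_port = 9090
# Restrict collectors; per the comments, all are enabled by default.
metrics_collectors = [
    "operations_total",
    "image_pulls_failure_total",
]
```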
	I1124 09:47:03.763415 1844089 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1124 09:47:03.763426 1844089 command_runner.go:130] > [crio.tracing]
	I1124 09:47:03.763442 1844089 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1124 09:47:03.763451 1844089 command_runner.go:130] > # enable_tracing = false
	I1124 09:47:03.763456 1844089 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1124 09:47:03.763461 1844089 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1124 09:47:03.763468 1844089 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1124 09:47:03.763476 1844089 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1124 09:47:03.763481 1844089 command_runner.go:130] > # CRI-O NRI configuration.
	I1124 09:47:03.763500 1844089 command_runner.go:130] > [crio.nri]
	I1124 09:47:03.763505 1844089 command_runner.go:130] > # Globally enable or disable NRI.
	I1124 09:47:03.763508 1844089 command_runner.go:130] > # enable_nri = true
	I1124 09:47:03.763524 1844089 command_runner.go:130] > # NRI socket to listen on.
	I1124 09:47:03.763535 1844089 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1124 09:47:03.763540 1844089 command_runner.go:130] > # NRI plugin directory to use.
	I1124 09:47:03.763544 1844089 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1124 09:47:03.763552 1844089 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1124 09:47:03.763560 1844089 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1124 09:47:03.763566 1844089 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1124 09:47:03.763634 1844089 command_runner.go:130] > # nri_disable_connections = false
	I1124 09:47:03.763648 1844089 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1124 09:47:03.763654 1844089 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1124 09:47:03.763669 1844089 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1124 09:47:03.763681 1844089 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1124 09:47:03.763685 1844089 command_runner.go:130] > # NRI default validator configuration.
	I1124 09:47:03.763692 1844089 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1124 09:47:03.763699 1844089 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1124 09:47:03.763703 1844089 command_runner.go:130] > # can be restricted/rejected:
	I1124 09:47:03.763707 1844089 command_runner.go:130] > # - OCI hook injection
	I1124 09:47:03.763719 1844089 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1124 09:47:03.763724 1844089 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1124 09:47:03.763730 1844089 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1124 09:47:03.763748 1844089 command_runner.go:130] > # - adjustment of linux namespaces
	I1124 09:47:03.763770 1844089 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1124 09:47:03.763778 1844089 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1124 09:47:03.763789 1844089 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1124 09:47:03.763792 1844089 command_runner.go:130] > #
	I1124 09:47:03.763797 1844089 command_runner.go:130] > # [crio.nri.default_validator]
	I1124 09:47:03.763802 1844089 command_runner.go:130] > # nri_enable_default_validator = false
	I1124 09:47:03.763807 1844089 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1124 09:47:03.763813 1844089 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1124 09:47:03.763843 1844089 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1124 09:47:03.763859 1844089 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1124 09:47:03.763864 1844089 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1124 09:47:03.763875 1844089 command_runner.go:130] > # nri_validator_required_plugins = [
	I1124 09:47:03.763879 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763885 1844089 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
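Putting the NRI validator keys above together, a minimal sketch enabling NRI with the builtin validator rejecting one class of adjustment might look like this (all key names are taken from the commented defaults above; the chosen values are illustrative):

```toml
# Sketch: enable NRI and reject OCI hook injection requested by plugins.
[crio.nri]
enable_nri = true

[crio.nri.default_validator]
nri_enable_default_validator = true
nri_validator_reject_oci_hook_adjustment = true
```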
	I1124 09:47:03.763897 1844089 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1124 09:47:03.763900 1844089 command_runner.go:130] > [crio.stats]
	I1124 09:47:03.763906 1844089 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1124 09:47:03.763912 1844089 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1124 09:47:03.763930 1844089 command_runner.go:130] > # stats_collection_period = 0
	I1124 09:47:03.763938 1844089 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1124 09:47:03.763955 1844089 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1124 09:47:03.763966 1844089 command_runner.go:130] > # collection_period = 0
	I1124 09:47:03.765749 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69660512Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1124 09:47:03.765775 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696644858Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1124 09:47:03.765802 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696680353Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1124 09:47:03.765817 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696705773Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1124 09:47:03.765831 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696792248Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:03.765844 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69715048Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1124 09:47:03.765855 1844089 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1124 09:47:03.766230 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:47:03.766250 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:47:03.766285 1844089 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:47:03.766313 1844089 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:47:03.766550 1844089 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
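	The podSubnet above (10.244.0.0/16), together with the controller-manager's allocate-node-cidrs=true, means each node is handed a per-node slice of the pod CIDR. A minimal sketch of that carving with Python's stdlib ipaddress module (illustrative only, not minikube's or Kubernetes' actual allocator):

```python
import ipaddress

# podSubnet from the kubeadm config above; the node-CIDR allocator
# hands each joining node the next free /24 slice of this /16.
pod_subnet = ipaddress.ip_network("10.244.0.0/16")
per_node = pod_subnet.subnets(new_prefix=24)
first_node_cidr = str(next(per_node))  # first node gets 10.244.0.0/24
```

In a single-node cluster like this one, only the first slice is ever allocated, which is why the node's pods all live in 10.244.0.x.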
	
	I1124 09:47:03.766656 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:47:03.773791 1844089 command_runner.go:130] > kubeadm
	I1124 09:47:03.773812 1844089 command_runner.go:130] > kubectl
	I1124 09:47:03.773818 1844089 command_runner.go:130] > kubelet
	I1124 09:47:03.774893 1844089 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:47:03.774995 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:47:03.782726 1844089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:47:03.796280 1844089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:47:03.809559 1844089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:47:03.822485 1844089 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:47:03.826210 1844089 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1124 09:47:03.826334 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:03.934288 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:04.458773 1844089 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:47:04.458800 1844089 certs.go:195] generating shared ca certs ...
	I1124 09:47:04.458824 1844089 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:04.458988 1844089 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:47:04.459071 1844089 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:47:04.459080 1844089 certs.go:257] generating profile certs ...
	I1124 09:47:04.459195 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:47:04.459263 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:47:04.459319 1844089 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:47:04.459333 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1124 09:47:04.459352 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1124 09:47:04.459364 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1124 09:47:04.459374 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1124 09:47:04.459384 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1124 09:47:04.459403 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1124 09:47:04.459415 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1124 09:47:04.459426 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1124 09:47:04.459482 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:47:04.459525 1844089 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:47:04.459534 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:47:04.459574 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:47:04.459609 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:47:04.459638 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:47:04.459701 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:04.459738 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.459752 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem -> /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.459763 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.460411 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:47:04.483964 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:47:04.505086 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:47:04.526066 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:47:04.552811 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:47:04.572010 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:47:04.590830 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:47:04.609063 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:47:04.627178 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:47:04.645228 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:47:04.662875 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:47:04.680934 1844089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:47:04.694072 1844089 ssh_runner.go:195] Run: openssl version
	I1124 09:47:04.700410 1844089 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1124 09:47:04.700488 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:47:04.708800 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712351 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712441 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712518 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.755374 1844089 command_runner.go:130] > 3ec20f2e
	I1124 09:47:04.755866 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:47:04.763956 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:47:04.772579 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776497 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776523 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776574 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.817126 1844089 command_runner.go:130] > b5213941
	I1124 09:47:04.817555 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:47:04.825631 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:47:04.834323 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838391 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838437 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838503 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.879479 1844089 command_runner.go:130] > 51391683
	I1124 09:47:04.879964 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
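	The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory CA lookup: each trusted PEM gets a symlink named `<subject-hash>.<n>` under /etc/ssl/certs so verification can locate the issuer by hash. A small sketch of the naming scheme (paths and helper name are illustrative):

```python
def hash_link_path(subject_hash: str, n: int = 0) -> str:
    # OpenSSL looks up a CA in a hashed directory as <hash>.<n>,
    # where n disambiguates hash collisions (usually 0).
    return f"/etc/ssl/certs/{subject_hash}.{n}"

# minikubeCA.pem hashed to b5213941 in the log above:
link = hash_link_path("b5213941")  # -> /etc/ssl/certs/b5213941.0
```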
	I1124 09:47:04.888201 1844089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892298 1844089 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892323 1844089 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1124 09:47:04.892330 1844089 command_runner.go:130] > Device: 259,1	Inode: 1049847     Links: 1
	I1124 09:47:04.892337 1844089 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:04.892344 1844089 command_runner.go:130] > Access: 2025-11-24 09:42:55.781942492 +0000
	I1124 09:47:04.892349 1844089 command_runner.go:130] > Modify: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892354 1844089 command_runner.go:130] > Change: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892360 1844089 command_runner.go:130] >  Birth: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892420 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:47:04.935687 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.935791 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:47:04.977560 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.978011 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:47:05.021496 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.021984 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:47:05.064844 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.065359 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:47:05.108127 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.108275 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:47:05.149417 1844089 command_runner.go:130] > Certificate will not expire
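	Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours; "Certificate will not expire" means it does not. A stdlib-only sketch of that predicate (function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def will_expire_within(not_after, seconds, now=None):
    # Mirrors `openssl x509 -checkend <seconds>`: true if the cert's
    # notAfter timestamp falls within <seconds> from now.
    now = now or datetime.now(timezone.utc)
    return not_after <= now + timedelta(seconds=seconds)
```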
	I1124 09:47:05.149874 1844089 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:05.149970 1844089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:47:05.150065 1844089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:05.178967 1844089 cri.go:89] found id: ""
	I1124 09:47:05.179068 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:47:05.186015 1844089 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1124 09:47:05.186039 1844089 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1124 09:47:05.186047 1844089 command_runner.go:130] > /var/lib/minikube/etcd:
	I1124 09:47:05.187003 1844089 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:47:05.187020 1844089 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:47:05.187103 1844089 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:47:05.195380 1844089 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:47:05.195777 1844089 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-373432" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.195884 1844089 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-1804834/kubeconfig needs updating (will repair): [kubeconfig missing "functional-373432" cluster setting kubeconfig missing "functional-373432" context setting]
	I1124 09:47:05.196176 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.196576 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.196729 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.197389 1844089 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 09:47:05.197410 1844089 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 09:47:05.197417 1844089 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 09:47:05.197421 1844089 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 09:47:05.197425 1844089 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 09:47:05.197478 1844089 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1124 09:47:05.197834 1844089 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:47:05.206841 1844089 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1124 09:47:05.206877 1844089 kubeadm.go:602] duration metric: took 19.851198ms to restartPrimaryControlPlane
	I1124 09:47:05.206901 1844089 kubeadm.go:403] duration metric: took 57.044926ms to StartCluster
	I1124 09:47:05.206915 1844089 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.206989 1844089 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.207632 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.208100 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:05.207869 1844089 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:05.208216 1844089 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:05.208554 1844089 addons.go:70] Setting storage-provisioner=true in profile "functional-373432"
	I1124 09:47:05.208570 1844089 addons.go:239] Setting addon storage-provisioner=true in "functional-373432"
	I1124 09:47:05.208595 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.208650 1844089 addons.go:70] Setting default-storageclass=true in profile "functional-373432"
	I1124 09:47:05.208696 1844089 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-373432"
	I1124 09:47:05.208964 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.209057 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.215438 1844089 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:05.218563 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:05.247382 1844089 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:05.249311 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.249495 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.249781 1844089 addons.go:239] Setting addon default-storageclass=true in "functional-373432"
	I1124 09:47:05.249815 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.250242 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.250436 1844089 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.250452 1844089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:05.250491 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.282635 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.300501 1844089 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:05.300528 1844089 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:05.300592 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.336568 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.425988 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:05.454084 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.488439 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.208671 1844089 node_ready.go:35] waiting up to 6m0s for node "functional-373432" to be "Ready" ...
	I1124 09:47:06.208714 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208746 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208771 1844089 retry.go:31] will retry after 239.578894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.208823 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208836 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208841 1844089 retry.go:31] will retry after 363.194189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208887 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.209209 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.448577 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:06.513317 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.513406 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.513430 1844089 retry.go:31] will retry after 455.413395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.572567 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.636310 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.636351 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.636371 1844089 retry.go:31] will retry after 493.81878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.709791 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.969606 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.043721 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.043767 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.043786 1844089 retry.go:31] will retry after 737.997673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.130919 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.189702 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.189740 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.189777 1844089 retry.go:31] will retry after 362.835066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.209918 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.209989 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.210325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.552843 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.609433 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.612888 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.612921 1844089 retry.go:31] will retry after 813.541227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.709061 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.709150 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.709464 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.782677 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.840776 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.844096 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.844127 1844089 retry.go:31] will retry after 1.225797654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.209825 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.209923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.210302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:08.210357 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:08.426707 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:08.489610 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:08.489648 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.489666 1844089 retry.go:31] will retry after 1.230621023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.709036 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.709146 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.709492 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.070184 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:09.132816 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.132856 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.132877 1844089 retry.go:31] will retry after 1.628151176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.209565 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.709579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.709673 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.721235 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:09.779532 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.779572 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.779591 1844089 retry.go:31] will retry after 1.535326746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.208957 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:10.709858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.709945 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.710278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:10.710329 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:10.761451 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:10.821517 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:10.825161 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.825191 1844089 retry.go:31] will retry after 2.22755575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.209753 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.209827 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.210169 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:11.315630 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:11.371370 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:11.375223 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.375258 1844089 retry.go:31] will retry after 3.052255935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.709710 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.709783 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.710113 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.208839 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.208935 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.209276 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.709439 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:13.052884 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:13.107513 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:13.110665 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.110696 1844089 retry.go:31] will retry after 2.047132712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.208986 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:13.209499 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:13.708863 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.708946 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.709225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.428018 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:14.497830 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:14.500554 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.500586 1844089 retry.go:31] will retry after 5.866686171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.708931 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.158123 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.208926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.209197 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.236504 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:15.240097 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.240134 1844089 retry.go:31] will retry after 4.86514919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.710246 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:15.710298 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:16.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.209082 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:16.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.709395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.209050 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.708987 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:18.208849 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.208918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.209189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:18.209229 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:18.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.708962 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.709278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.709232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:20.105978 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:20.163220 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.166411 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.166455 1844089 retry.go:31] will retry after 7.973407294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.209623 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.209700 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.210040 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:20.210093 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:20.367494 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:20.426176 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.426221 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.426244 1844089 retry.go:31] will retry after 7.002953248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.709786 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.710109 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.208846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.208922 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.709365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.209249 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.209597 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.709231 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.709348 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.709682 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:22.709735 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:23.209559 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.209633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.209953 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:23.709725 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.710141 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.209255 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.708973 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.709052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:25.209389 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.209467 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.209841 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:25.209903 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:25.709642 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.709719 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.209709 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.210119 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.709913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.709992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.710307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.208828 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.208902 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.209226 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.429779 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:27.489021 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:27.489061 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.489078 1844089 retry.go:31] will retry after 11.455669174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.709620 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.709697 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.710061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:27.710112 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:28.140690 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:28.207909 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:28.207963 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.207981 1844089 retry.go:31] will retry after 7.295318191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.209358 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:28.709045 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.709130 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.709479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.209267 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.209347 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.209673 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.709959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.710312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:29.710375 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:30.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.209713 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.210010 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:30.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.208899 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.709282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:32.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.209035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.209376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:32.209432 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:32.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.709024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.208858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.208927 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.209204 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.709003 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:34.208983 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.209403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:34.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:34.709379 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.709553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.709927 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.209738 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.209811 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.210108 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.503497 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:35.564590 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:35.564633 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.564653 1844089 retry.go:31] will retry after 18.757863028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.709881 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.709958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.710297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.208842 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.208909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.209196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.708965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.709288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:36.709337 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:37.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.209034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:37.708926 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.708999 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:38.709418 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:38.945958 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:39.002116 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:39.006563 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.006598 1844089 retry.go:31] will retry after 17.731618054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.209830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.210101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:39.708971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.709049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.209137 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.709279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.709607 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:40.709669 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:41.209237 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:41.709465 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.709538 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.709862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.209660 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.209740 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.210065 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.709826 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.710247 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:42.710300 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:43.208851 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.208929 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.209238 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:43.708832 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.708904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.709198 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.209292 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.709200 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.709637 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:45.209579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.209674 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.210095 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:45.210174 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:45.708846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.708926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.709257 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.709348 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.208969 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:47.709460 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:48.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:48.708913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.708985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.709311 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.209041 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.709341 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.709413 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:49.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:50.209504 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.209579 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.209916 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:50.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.709795 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.209819 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.210144 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.708840 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.708913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.709251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:52.208995 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.209079 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.209450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:52.209504 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:52.709193 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.709263 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.709579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.209019 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.709514 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.208914 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.208983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.323627 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:54.379391 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:54.382809 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.382842 1844089 retry.go:31] will retry after 21.097681162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.709482 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.709561 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.709905 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:54.709960 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:55.209834 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.209915 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.210225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:55.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.708984 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.709297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.209078 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.709184 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.709266 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.709603 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.738841 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:56.794457 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:56.797830 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:56.797870 1844089 retry.go:31] will retry after 32.033139138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:57.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.209553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.209864 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:57.209918 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:57.709718 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.709790 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.710100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.209898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.209970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.210337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.709037 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.709135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.209241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.209573 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.709578 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.709657 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.710027 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:59.710084 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:00.211215 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.211305 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.211621 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:00.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.208998 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.209081 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.708891 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.708967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:02.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.209136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.209526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:02.209599 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:02.709293 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.709375 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.709754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.209529 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.209595 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.209866 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.709708 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.709780 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.710093 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:04.209893 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.209965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.210332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:04.210385 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:04.709021 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.709445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.209464 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.209551 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.209872 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.709670 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.709745 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.710155 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.209763 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.209847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.708847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.708923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.709285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:06.709340 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:07.208931 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:07.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.709326 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.709122 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.709201 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.709539 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:08.709592 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:09.209218 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.209284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.209536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:09.709509 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.709587 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.709963 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.209602 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.209679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.209999 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.709702 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.709772 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.710032 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:10.710072 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:11.209870 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.209951 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.210285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:11.708984 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.208994 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:13.209062 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.209163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:13.209567 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:13.709210 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.709665 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.209027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.708929 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:15.209506 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.209583 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.209851 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:15.209900 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:15.481440 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:15.543475 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:15.543517 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.543536 1844089 retry.go:31] will retry after 17.984212056s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.709841 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.709917 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.710203 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.208972 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.209053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.209359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.708920 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.709254 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.209025 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.209445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.709181 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.709254 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.709571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:17.709636 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:18.209204 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.209276 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:18.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.209167 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.209240 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.709543 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.709616 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.709867 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:19.709908 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:20.209743 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.209813 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.210142 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:20.708844 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.708918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.709248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.709064 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:22.209022 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.209096 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.209401 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:22.209447 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:22.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.209381 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.709077 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.709165 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.709527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:24.209256 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.209332 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:24.209710 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:24.709523 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.709594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.709919 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.209714 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.209794 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.210176 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.709866 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.709934 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.710232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.709174 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.709252 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.709562 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:26.709621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:27.209207 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.209330 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.209681 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:27.709493 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.709901 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.209534 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.209607 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.209945 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.709691 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:28.710042 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:28.831261 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:48:28.892751 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892791 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892882 1844089 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:29.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:29.709415 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.709488 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.709832 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.209666 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.209735 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.209996 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.709837 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.709912 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.710250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:30.710310 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:31.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.209451 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:31.708995 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.709068 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.209127 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.209200 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.209540 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.709251 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.709359 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.709688 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:33.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:33.209573 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:33.528038 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:33.587216 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587268 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587355 1844089 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:33.590586 1844089 out.go:179] * Enabled addons: 
	I1124 09:48:33.594109 1844089 addons.go:530] duration metric: took 1m28.385890989s for enable addons: enabled=[]
	I1124 09:48:33.709504 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.709580 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.709909 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.209684 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.209763 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.210103 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:35.209792 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.209867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.210196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:35.210254 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:35.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.209290 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.708942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.209089 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.209182 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:37.709398 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:38.208956 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.209049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.209393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:38.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.209063 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.209144 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.209398 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.709762 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:39.709826 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:40.209362 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.209445 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.209801 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:40.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.709695 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.710016 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.209808 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.209911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.210242 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.708947 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.709450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:42.209333 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.209441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.209737 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:42.209782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:42.709513 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.709593 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.709913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.209705 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.209787 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.210136 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.709811 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.709882 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.710135 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.208840 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.208916 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.709434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:44.709491 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:45.209557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.210004 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:45.709853 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.709947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.710263 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.708971 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:47.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:47.209423 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:47.708928 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.709090 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.709181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.709512 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:49.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:49.209487 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:49.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.209043 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.208831 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.209321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.708959 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:51.709417 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:52.209136 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.209591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:52.709205 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.709536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.209062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.709175 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.709255 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.709599 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:53.709661 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:54.209206 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.209288 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:54.709557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.709679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.709998 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.209740 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.210158 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.708864 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:56.208988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.209080 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:56.209502 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:56.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.709658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.209431 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.209503 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.209825 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.709393 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.709781 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:58.209591 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.209670 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.210036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:58.210095 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:58.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.709861 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.208919 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.709435 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.709520 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.709836 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:00.209722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.210110 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:00.210156 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:00.709882 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.709966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.710301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.208997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.709044 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.709069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:02.709406 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:03.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.209309 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:03.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.709027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.709334 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.208982 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.209059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.709678 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:04.709782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:05.209548 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.209645 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.209977 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:05.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.710166 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.208981 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.209051 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.209332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.709332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:07.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.209086 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.209494 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:07.209563 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:07.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.709391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.209399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.709011 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.709085 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.209052 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.209488 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.709362 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.709442 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.709796 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:09.709855 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:10.209613 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.209690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:10.709735 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.709803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.209881 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.209958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.210304 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.709359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:12.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:12.209396 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:12.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.709325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.209056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.209385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.709008 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.709380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:14.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.209238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.209577 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:14.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:14.709397 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.709478 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.709814 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.209760 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.210102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.709949 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.710282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.209016 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.709074 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.709163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.709419 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:16.709459 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:17.209141 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.209215 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:17.709286 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.709666 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.209424 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.209499 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.209754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.709505 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.709585 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.709897 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:18.709953 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:19.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.209779 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.210117 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:19.709834 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:21.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:21.209415 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:21.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.709029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.209126 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.209204 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.209575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.709550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:23.209231 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.209670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:23.209763 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:23.709555 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.709633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.709995 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.209767 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.209841 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.709526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:25.209328 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.209411 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.209756 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:25.209816 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:25.709508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.709600 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.709938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.209774 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.209856 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.210202 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.709369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:27.209746 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.210131 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:27.210184 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:27.708830 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.708905 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.208880 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.209307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.709007 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.709327 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.209020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.709345 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.709441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.709777 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:29.709838 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:30.209612 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.209687 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.209958 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:30.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.709798 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.710129 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.208884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.209299 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.708974 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:32.208916 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.208993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:32.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:32.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.208919 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.208994 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.209330 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.709056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.709413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:34.209151 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.209227 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:34.209646 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:34.709436 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.709506 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.709774 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.209725 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.209803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.210160 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.708884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.708977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.208912 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.209323 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.709458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:36.709524 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:37.209047 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.209151 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:37.709220 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.709324 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.709631 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.209508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.209592 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.209964 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.709785 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.709869 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.710199 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:38.710257 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:39.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.208884 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.209168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:39.709057 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.709156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.709501 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.209097 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.209195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.709222 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.709295 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.709630 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:41.209317 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.209397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.209747 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:41.209802 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:41.709569 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.709654 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.709993 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.209817 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.209904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.210200 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.209070 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.709575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:43.709620 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:44.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:44.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.709401 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.709783 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.209860 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.209959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.210271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.708945 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:46.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.209515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:46.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:46.709202 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.709268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.209384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.709402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.209161 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.209414 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.709091 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.709194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.709569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:48.709627 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:49.209307 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.209384 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.209719 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:49.709527 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.709599 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.709865 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.209620 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.209699 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.210039 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.709717 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.709799 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:50.710183 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:51.208825 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.208894 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.209172 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:51.708925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.709349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.708893 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:53.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.209349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:53.209399 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:53.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.208920 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.209318 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.709373 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.709458 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.709760 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:55.209592 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.209978 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:55.210040 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:55.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.710161 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.208943 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.209271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.708876 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.708959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.208866 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.209285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.708997 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.709427 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:57.709482 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:58.209166 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.209246 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.209658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:58.709454 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.709524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.709780 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.209521 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.209598 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.209934 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.709770 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.709854 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.710168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:59.710230 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:00.208926 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.210913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:50:00.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.709842 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.710201 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.209315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.709093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:02.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.209057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:02.209542 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:02.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.709389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.209380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.708939 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.709357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.208970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.209268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.709182 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.709269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.709623 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:04.709678 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:05.209442 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.209524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.209862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:05.709612 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.709690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.710022 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.209806 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.209880 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.210219 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:07.209084 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.209187 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.209448 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:07.209497 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:07.709139 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.709341 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.209829 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.209903 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.708897 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.708964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.709236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.208927 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.209002 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.209378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.708935 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:09.709424 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:10.208903 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.209331 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:10.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.709423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.209530 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.709132 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.709202 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:11.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:12.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:12.709068 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.709177 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.709636 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.209220 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.209299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.209571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:14.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.209025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:14.209433 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:14.708909 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.708988 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.709306 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.209826 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.210152 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.708902 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.208905 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.208978 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.209278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.708874 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.709267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:16.709311 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:17.208877 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.209356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:17.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.209413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.709157 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.709238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:18.709645 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:19.209201 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.209269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.209518 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:19.709485 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.709558 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.709880 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.209636 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.209974 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.709755 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.709829 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.710090 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:20.710130 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:21.209835 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.209913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:21.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.709338 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.208900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.208981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.709058 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:23.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.209616 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:23.209677 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:23.709211 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.708953 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.209203 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.209580 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.709392 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.709705 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:25.709765 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:26.209510 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.209594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.209928 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:26.709733 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.209837 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.209926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.210235 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.709350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:28.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.209251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:28.209296 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:28.709016 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.709092 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.709432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.709708 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:30.209514 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.209603 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.209930 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:30.209989 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:30.709705 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.709782 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.710096 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.209823 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.210153 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.709337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.209065 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.209484 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:32.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:33.209221 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.209638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:33.709229 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.709309 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.709638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.209279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.209527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.709451 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.709526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.709824 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:34.709870 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:35.209712 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.210156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:35.709774 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.710101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.208924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.209266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.708961 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.709411 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:37.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.208992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.209261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:37.209303 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:37.708946 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.209345 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.709003 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.709091 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.709404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:39.209187 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.209613 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:39.209672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:39.709433 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.709508 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.709838 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.209598 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.209675 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.709773 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.709855 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.710189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.208908 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:41.709318 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:42.209001 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.209093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:42.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.709286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.709587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.209235 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.209303 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.709313 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.709652 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:43.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:44.209469 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.209542 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.209879 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:44.709684 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.709755 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.710023 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.208845 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.208942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.709723 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.709804 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.710156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:45.710211 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:46.208872 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.208948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.209249 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:46.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.208964 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.709147 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.709424 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:48.209095 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.209192 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:48.209580 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:48.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.709378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.209077 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.709414 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.709491 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.709816 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:50.209655 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.209742 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.210066 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:50.210123 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:50.709861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.709937 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.710188 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.208878 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.208952 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.209322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.708914 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.708993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.208985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.709023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.709362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:52.709420 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:53.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.209038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.209404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:53.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.708997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.709294 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.709054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.709449 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:54.709516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:55.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.209634 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.209938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:55.709754 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.709830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.710148 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.208939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:57.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.209386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:57.209445 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:57.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.709399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.209082 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.209168 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.209479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.709393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.208975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.209052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.708894 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.709244 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:59.709289 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:00.209000 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.209097 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:00.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.209068 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.209486 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:01.709394 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:02.208943 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.209065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:02.708872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.708947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.709229 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:03.208953 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.210127 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:51:03.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.708939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.709302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:04.209006 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.209406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:04.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:04.709392 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.709474 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.709835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.209403 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.209479 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.209835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.709680 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.709766 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.710028 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:06.209869 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.209955 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.210295 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:06.210355 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:06.708964 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.709046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.709408 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.708938 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.209150 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.209225 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.709219 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.709289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.709627 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:08.709719 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:09.209522 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.209981 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:09.709768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.709843 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.208987 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:11.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.209070 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:11.209465 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:11.708886 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.708972 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.709239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.208935 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.708976 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.208896 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:13.709391 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:14.208984 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:14.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.709615 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.209618 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.209698 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.210033 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.709832 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.709911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.710236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:15.710293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:16.208918 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:16.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.709009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.709328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.209046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.209357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.708924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.709185 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:18.208887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.208963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.209319 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:18.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:18.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.709038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.208917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.709241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.709591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:20.209330 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.209415 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:20.209872 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:20.709638 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.709728 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.209872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.209964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.210347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.709162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.709523 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.209023 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.209095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.209382 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.709055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:22.709477 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:23.209195 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.209283 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:23.709228 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.709557 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.709350 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.709431 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.709744 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:24.709799 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:25.209704 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.210041 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:25.709815 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.709891 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.209912 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.209990 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.210312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.708887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.708968 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.709274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:27.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:27.209444 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:27.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.209045 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.209134 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:29.209164 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:29.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:29.709432 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.709510 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.709795 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.209546 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.209973 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.709624 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.709702 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.710036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:31.209789 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.209866 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.210145 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:31.210192 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:31.708857 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.709271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.208962 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.209409 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.708881 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.708953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.709262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.709012 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:33.709407 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:34.209051 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.209156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:34.709486 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.709969 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.209768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.209850 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.210220 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.709035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:36.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.209372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:36.209421 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:36.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.709519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.208947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.209239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.709416 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:38.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.209257 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:38.209647 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:38.709163 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.709230 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.208929 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.209009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.209347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.709324 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.709397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.709728 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:40.209489 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.209557 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.209830 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:40.209876 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:40.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.709707 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.710055 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.209685 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.209762 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.210061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.709752 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.709828 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.710112 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.709372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:42.709426 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:43.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:43.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.709026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.209194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.209587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.709472 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.709546 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.709820 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:44.709861 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:45.209849 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.209939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.210268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:45.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.709006 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.709307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.209264 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.709403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:47.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.209219 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.209569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:47.209622 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:47.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.709312 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.709563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.208991 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.709134 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.709207 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.709500 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.209300 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:49.709409 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:50.209121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.209206 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:50.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.709261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.209021 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.209129 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.209441 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.709189 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.709265 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.709596 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:51.709649 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:52.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.209290 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.209551 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:52.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.209080 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.209550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.708975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.709317 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.209337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:54.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:54.708980 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.209380 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.209452 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.209779 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.709062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.709456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:56.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.209063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:56.209437 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:56.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.709867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.209908 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.209985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.210307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:58.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.209185 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:58.209485 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:58.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.209363 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.708916 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.709322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.209018 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.709370 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.709700 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:00.709758 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:01.209455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.209526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.209787 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:01.709649 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.709729 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.209841 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.209925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.210265 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.709293 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:03.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.209407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:03.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:03.709146 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.709228 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.709570 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.209286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.209544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.709581 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.709667 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.710009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.208954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.708991 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.709066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:05.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:06.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:06.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.709218 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.709559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.209217 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.209291 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.209612 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.709400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:08.209119 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.209197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:08.209621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:08.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.709292 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:10.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:11.209191 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.209268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.209610 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:11.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.709609 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.208967 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.709039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:13.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.209308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:13.209350 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:13.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.709140 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.709483 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.709549 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.709685 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.710128 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:15.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.209289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.209620 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:15.209679 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:15.709455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.709531 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.709878 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.209651 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.209725 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.209983 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.710195 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:17.709412 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:18.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.208939 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.209302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.709272 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.709356 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:19.709724 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:20.209502 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.209578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.209951 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:20.709782 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.710102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.209876 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.209953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.210310 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.708981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.709321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:22.208895 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.208966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.209252 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:22.209293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:22.709034 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.709136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.709493 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.209350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.708983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.709272 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:24.208934 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.209013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:24.209428 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:24.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.709396 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.209216 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.209546 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.709028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:26.209056 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.209172 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:26.209508 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:26.708880 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.708948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.709291 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.709023 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.709120 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.209060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.209160 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:28.709443 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:29.209162 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.209244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:29.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.709568 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.709818 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.209668 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.209750 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.210098 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.708867 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.708942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:31.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.208986 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.209328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:31.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:31.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.709072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.709157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:33.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:33.209513 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:33.709035 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.709137 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.208964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.209274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.709168 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.709244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:35.209409 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.209492 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.209807 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:35.209852 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:35.709526 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.709597 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.709869 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.209708 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.210043 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.710262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.209297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:37.709440 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:38.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.209069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:38.708972 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.709373 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.709697 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:39.709756 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:40.209475 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.209550 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.209908 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:40.709677 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.709752 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.710115 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.209759 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.210192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.708889 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.709284 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:42.208977 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:42.209516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:42.709031 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.709125 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.709477 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.209073 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.209164 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.209444 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.709305 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.709379 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.709632 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:44.709672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:45.209865 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.210034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.211000 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:45.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.709376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.209157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.209473 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.709360 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:47.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.209066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.209434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:47.209489 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:47.709149 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.709220 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.709470 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.209367 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.208882 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.208956 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.209248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.708923 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:49.709401 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:50.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.209015 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.209369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:50.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.709142 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.709429 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.209160 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.209581 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.709273 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.709351 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.709670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:51.709725 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:52.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.209549 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.209889 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:52.709740 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.709823 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.710180 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.209005 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.709060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:54.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:54.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:54.708969 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.709044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.709387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.209314 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.209382 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.209635 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.709021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:56.209074 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:56.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:56.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.709266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.709098 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.709195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.709513 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.208958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.209260 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.708954 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.709420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:58.709478 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:59.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:59.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.709259 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.709688 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.710034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:00.710088 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:01.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.209777 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.210034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:01.709784 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.709858 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.710223 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.209882 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.209960 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.210301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.708970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:03.209442 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:03.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.209135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.209531 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.709573 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.709992 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:05.209202 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.209287 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.209886 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:05.209937 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:05.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.709000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:06.209058 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:06.209140 1844089 node_ready.go:38] duration metric: took 6m0.000414768s for node "functional-373432" to be "Ready" ...
	I1124 09:53:06.212349 1844089 out.go:203] 
	W1124 09:53:06.215554 1844089 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1124 09:53:06.215587 1844089 out.go:285] * 
	W1124 09:53:06.217723 1844089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:53:06.220637 1844089 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 09:53:14 functional-373432 crio[6244]: time="2025-11-24T09:53:14.817550161Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7235f97c-291f-4621-a834-368f0380908f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:14 functional-373432 crio[6244]: time="2025-11-24T09:53:14.844023057Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=07b8cd5a-b881-4bbe-a717-727d93ea6d16 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:14 functional-373432 crio[6244]: time="2025-11-24T09:53:14.84418017Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=07b8cd5a-b881-4bbe-a717-727d93ea6d16 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:14 functional-373432 crio[6244]: time="2025-11-24T09:53:14.844235974Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=07b8cd5a-b881-4bbe-a717-727d93ea6d16 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.9320376Z" level=info msg="Checking image status: minikube-local-cache-test:functional-373432" id=a4ed308e-d514-45fd-a3fc-8a74361d8993 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.961735471Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-373432" id=045576c9-388a-412c-91fa-df1dc91ddbf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.961910226Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-373432 not found" id=045576c9-388a-412c-91fa-df1dc91ddbf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.961964167Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-373432 found" id=045576c9-388a-412c-91fa-df1dc91ddbf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.987746256Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-373432" id=9d1a18c1-6b22-4a5b-8a9e-2ba1a06e3d01 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.987899177Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-373432 not found" id=9d1a18c1-6b22-4a5b-8a9e-2ba1a06e3d01 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.987949606Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-373432 found" id=9d1a18c1-6b22-4a5b-8a9e-2ba1a06e3d01 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:16 functional-373432 crio[6244]: time="2025-11-24T09:53:16.791685967Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d99223e9-26ed-44d8-ad07-c98ffe6b880a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.129751526Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=96077913-9966-483f-9e4d-73605b805e23 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.129892295Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=96077913-9966-483f-9e4d-73605b805e23 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.129931442Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=96077913-9966-483f-9e4d-73605b805e23 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.804659087Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=15984ffe-bca9-48a6-a98a-61eb97b3a11a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.804783511Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=15984ffe-bca9-48a6-a98a-61eb97b3a11a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.804819852Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=15984ffe-bca9-48a6-a98a-61eb97b3a11a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.836659051Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=674e2ead-5287-4d08-8e7b-3cedc6986504 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.836782417Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=674e2ead-5287-4d08-8e7b-3cedc6986504 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.836818766Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=674e2ead-5287-4d08-8e7b-3cedc6986504 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.862705529Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f9b45855-6480-4613-bf87-7678688fe267 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.862838092Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f9b45855-6480-4613-bf87-7678688fe267 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.862874335Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f9b45855-6480-4613-bf87-7678688fe267 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:18 functional-373432 crio[6244]: time="2025-11-24T09:53:18.393780554Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=39982433-4391-44e8-bf72-9d67ba5887f9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:53:19.923710   10194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:19.924428   10194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:19.926242   10194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:19.926878   10194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:19.928456   10194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 09:53:19 up  8:35,  0 user,  load average: 0.52, 0.28, 0.56
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 09:53:17 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:17 functional-373432 kubelet[9996]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:17 functional-373432 kubelet[9996]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:17 functional-373432 kubelet[9996]: E1124 09:53:17.779878    9996 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:17 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:17 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:18 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Nov 24 09:53:18 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:18 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:18 functional-373432 kubelet[10086]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:18 functional-373432 kubelet[10086]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:18 functional-373432 kubelet[10086]: E1124 09:53:18.464317   10086 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:18 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:18 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:19 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Nov 24 09:53:19 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:19 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:19 functional-373432 kubelet[10107]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:19 functional-373432 kubelet[10107]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:19 functional-373432 kubelet[10107]: E1124 09:53:19.276953   10107 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:19 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:19 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:19 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Nov 24 09:53:19 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:19 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (391.762095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-373432 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-373432 get pods: exit status 1 (104.554251ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-373432 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (293.649427ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 logs -n 25: (1.073450625s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr                                            │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls                                                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format json --alsologtostderr                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format table --alsologtostderr                                                                                       │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ delete         │ -p functional-498341                                                                                                                              │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │ 24 Nov 25 09:38 UTC │
	│ start          │ -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │                     │
	│ start          │ -p functional-373432 --alsologtostderr -v=8                                                                                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │                     │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:latest                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add minikube-local-cache-test:functional-373432                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache delete minikube-local-cache-test:functional-373432                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl images                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ cache          │ functional-373432 cache reload                                                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ kubectl        │ functional-373432 kubectl -- --context functional-373432 get pods                                                                                 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:46:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:46:59.387016 1844089 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:46:59.387211 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387243 1844089 out.go:374] Setting ErrFile to fd 2...
	I1124 09:46:59.387263 1844089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:46:59.387557 1844089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:46:59.388008 1844089 out.go:368] Setting JSON to false
	I1124 09:46:59.388882 1844089 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30570,"bootTime":1763947050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:46:59.388979 1844089 start.go:143] virtualization:  
	I1124 09:46:59.392592 1844089 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:46:59.396303 1844089 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:46:59.396370 1844089 notify.go:221] Checking for updates...
	I1124 09:46:59.402093 1844089 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:46:59.405033 1844089 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:46:59.407908 1844089 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:46:59.411405 1844089 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:46:59.414441 1844089 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:46:59.417923 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:46:59.418109 1844089 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:46:59.451337 1844089 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:46:59.451452 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.507906 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.498692309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.508018 1844089 docker.go:319] overlay module found
	I1124 09:46:59.511186 1844089 out.go:179] * Using the docker driver based on existing profile
	I1124 09:46:59.514098 1844089 start.go:309] selected driver: docker
	I1124 09:46:59.514123 1844089 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.514235 1844089 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:46:59.514350 1844089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:46:59.569823 1844089 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:46:59.559648119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:46:59.570237 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:46:59.570306 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:46:59.570363 1844089 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:46:59.573590 1844089 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:46:59.576497 1844089 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:46:59.579448 1844089 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:46:59.582547 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:46:59.582648 1844089 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:46:59.602755 1844089 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:46:59.602781 1844089 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:46:59.648405 1844089 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:46:59.826473 1844089 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:46:59.826636 1844089 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:46:59.826856 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:46:59.826893 1844089 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:46:59.826927 1844089 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:46:59.826975 1844089 start.go:364] duration metric: took 25.756µs to acquireMachinesLock for "functional-373432"
	I1124 09:46:59.826990 1844089 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:46:59.826996 1844089 fix.go:54] fixHost starting: 
	I1124 09:46:59.827258 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:46:59.843979 1844089 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:46:59.844011 1844089 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:46:59.847254 1844089 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:46:59.847299 1844089 machine.go:94] provisionDockerMachine start ...
	I1124 09:46:59.847379 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:46:59.872683 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:46:59.873034 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:46:59.873051 1844089 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:46:59.992797 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.044426 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.044454 1844089 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:47:00.044547 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.104810 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.105156 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.105170 1844089 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:47:00.386378 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:47:00.386611 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.409023 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:00.411110 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.411442 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.411467 1844089 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:47:00.595280 1844089 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595319 1844089 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595392 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:47:00.595381 1844089 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595403 1844089 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 139.325µs
	I1124 09:47:00.595412 1844089 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595423 1844089 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595434 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:47:00.595442 1844089 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 62.902µs
	I1124 09:47:00.595450 1844089 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595457 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:47:00.595463 1844089 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 41.207µs
	I1124 09:47:00.595469 1844089 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:47:00.595461 1844089 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595477 1844089 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595494 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:47:00.595500 1844089 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.394µs
	I1124 09:47:00.595507 1844089 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595510 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:47:00.595517 1844089 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595524 1844089 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.5µs
	I1124 09:47:00.595532 1844089 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:47:00.595282 1844089 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:47:00.595546 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:47:00.595552 1844089 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 36.923µs
	I1124 09:47:00.595556 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:47:00.595558 1844089 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:47:00.595562 1844089 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 302.437µs
	I1124 09:47:00.595572 1844089 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:47:00.595568 1844089 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:47:00.595581 1844089 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 263.856µs
	I1124 09:47:00.595587 1844089 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:47:00.595593 1844089 cache.go:87] Successfully saved all images to host disk.
	I1124 09:47:00.596331 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:47:00.596354 1844089 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:47:00.596379 1844089 ubuntu.go:190] setting up certificates
	I1124 09:47:00.596403 1844089 provision.go:84] configureAuth start
	I1124 09:47:00.596480 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:00.614763 1844089 provision.go:143] copyHostCerts
	I1124 09:47:00.614805 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614845 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:47:00.614865 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:47:00.614942 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:47:00.615049 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615076 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:47:00.615081 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:47:00.615111 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:47:00.615166 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615187 1844089 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:47:00.615191 1844089 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:47:00.615218 1844089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:47:00.615273 1844089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:47:00.746073 1844089 provision.go:177] copyRemoteCerts
	I1124 09:47:00.746146 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:47:00.746187 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.767050 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:00.873044 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1124 09:47:00.873153 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:47:00.891124 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1124 09:47:00.891207 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:47:00.909032 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1124 09:47:00.909209 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:47:00.927426 1844089 provision.go:87] duration metric: took 330.992349ms to configureAuth
	I1124 09:47:00.927482 1844089 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:47:00.927686 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:00.927808 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:00.945584 1844089 main.go:143] libmachine: Using SSH client type: native
	I1124 09:47:00.945906 1844089 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:47:00.945929 1844089 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:47:01.279482 1844089 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:47:01.279511 1844089 machine.go:97] duration metric: took 1.432203745s to provisionDockerMachine
	I1124 09:47:01.279522 1844089 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:47:01.279534 1844089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:47:01.279608 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:47:01.279659 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.306223 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.413310 1844089 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:47:01.416834 1844089 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1124 09:47:01.416855 1844089 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1124 09:47:01.416859 1844089 command_runner.go:130] > VERSION_ID="12"
	I1124 09:47:01.416863 1844089 command_runner.go:130] > VERSION="12 (bookworm)"
	I1124 09:47:01.416868 1844089 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1124 09:47:01.416884 1844089 command_runner.go:130] > ID=debian
	I1124 09:47:01.416889 1844089 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1124 09:47:01.416894 1844089 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1124 09:47:01.416900 1844089 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1124 09:47:01.416956 1844089 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:47:01.416971 1844089 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:47:01.416982 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:47:01.417038 1844089 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:47:01.417141 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:47:01.417149 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /etc/ssl/certs/18067042.pem
	I1124 09:47:01.417225 1844089 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:47:01.417238 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> /etc/test/nested/copy/1806704/hosts
	I1124 09:47:01.417285 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:47:01.425057 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:01.443829 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:47:01.461688 1844089 start.go:296] duration metric: took 182.151565ms for postStartSetup
	I1124 09:47:01.461806 1844089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:47:01.461866 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.478949 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.582285 1844089 command_runner.go:130] > 19%
	I1124 09:47:01.582359 1844089 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:47:01.587262 1844089 command_runner.go:130] > 159G
	I1124 09:47:01.587296 1844089 fix.go:56] duration metric: took 1.760298367s for fixHost
	I1124 09:47:01.587308 1844089 start.go:83] releasing machines lock for "functional-373432", held for 1.76032423s
	I1124 09:47:01.587385 1844089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:47:01.605227 1844089 ssh_runner.go:195] Run: cat /version.json
	I1124 09:47:01.605290 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.605558 1844089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:47:01.605651 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:01.623897 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.640948 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:01.724713 1844089 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1763789673-21948", "minikube_version": "v1.37.0", "commit": "2996c7ec74d570fa8ab37e6f4f8813150d0c7473"}
	I1124 09:47:01.724863 1844089 ssh_runner.go:195] Run: systemctl --version
	I1124 09:47:01.812522 1844089 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1124 09:47:01.816014 1844089 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1124 09:47:01.816053 1844089 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1124 09:47:01.816128 1844089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:47:01.851397 1844089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1124 09:47:01.855673 1844089 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1124 09:47:01.855841 1844089 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:47:01.855908 1844089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:47:01.863705 1844089 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:47:01.863730 1844089 start.go:496] detecting cgroup driver to use...
	I1124 09:47:01.863762 1844089 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:47:01.863809 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:47:01.879426 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:47:01.892902 1844089 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:47:01.892974 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:47:01.908995 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:47:01.922294 1844089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:47:02.052541 1844089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:47:02.189051 1844089 docker.go:234] disabling docker service ...
	I1124 09:47:02.189218 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:47:02.205065 1844089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:47:02.219126 1844089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:47:02.329712 1844089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:47:02.449311 1844089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:47:02.462019 1844089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:47:02.474641 1844089 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1124 09:47:02.476035 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:02.633334 1844089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:47:02.633408 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.642946 1844089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:47:02.643028 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.652272 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.661578 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.670499 1844089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:47:02.678769 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.688087 1844089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.696980 1844089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:02.705967 1844089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:47:02.713426 1844089 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1124 09:47:02.713510 1844089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:47:02.720989 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:02.841969 1844089 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:47:03.036830 1844089 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:47:03.036905 1844089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:47:03.040587 1844089 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1124 09:47:03.040611 1844089 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1124 09:47:03.040618 1844089 command_runner.go:130] > Device: 0,72	Inode: 1805        Links: 1
	I1124 09:47:03.040633 1844089 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:03.040639 1844089 command_runner.go:130] > Access: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040645 1844089 command_runner.go:130] > Modify: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040654 1844089 command_runner.go:130] > Change: 2025-11-24 09:47:02.973077995 +0000
	I1124 09:47:03.040658 1844089 command_runner.go:130] >  Birth: -
	I1124 09:47:03.041299 1844089 start.go:564] Will wait 60s for crictl version
	I1124 09:47:03.041375 1844089 ssh_runner.go:195] Run: which crictl
	I1124 09:47:03.044736 1844089 command_runner.go:130] > /usr/local/bin/crictl
	I1124 09:47:03.045405 1844089 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:47:03.072144 1844089 command_runner.go:130] > Version:  0.1.0
	I1124 09:47:03.072339 1844089 command_runner.go:130] > RuntimeName:  cri-o
	I1124 09:47:03.072489 1844089 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1124 09:47:03.072634 1844089 command_runner.go:130] > RuntimeApiVersion:  v1
	I1124 09:47:03.075078 1844089 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:47:03.075181 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.102664 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.102689 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.102697 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.102702 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.102708 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.102713 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.102717 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.102722 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.102726 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.102730 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.102734 1844089 command_runner.go:130] >      static
	I1124 09:47:03.102737 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.102741 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.102745 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.102753 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.102757 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.102763 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.102768 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.102772 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.102781 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.104732 1844089 ssh_runner.go:195] Run: crio --version
	I1124 09:47:03.133953 1844089 command_runner.go:130] > crio version 1.34.2
	I1124 09:47:03.133980 1844089 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1124 09:47:03.133987 1844089 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1124 09:47:03.133991 1844089 command_runner.go:130] >    GitTreeState:   dirty
	I1124 09:47:03.133996 1844089 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1124 09:47:03.134000 1844089 command_runner.go:130] >    GoVersion:      go1.24.6
	I1124 09:47:03.134004 1844089 command_runner.go:130] >    Compiler:       gc
	I1124 09:47:03.134008 1844089 command_runner.go:130] >    Platform:       linux/arm64
	I1124 09:47:03.134012 1844089 command_runner.go:130] >    Linkmode:       static
	I1124 09:47:03.134016 1844089 command_runner.go:130] >    BuildTags:
	I1124 09:47:03.134019 1844089 command_runner.go:130] >      static
	I1124 09:47:03.134023 1844089 command_runner.go:130] >      netgo
	I1124 09:47:03.134027 1844089 command_runner.go:130] >      osusergo
	I1124 09:47:03.134031 1844089 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1124 09:47:03.134039 1844089 command_runner.go:130] >      seccomp
	I1124 09:47:03.134043 1844089 command_runner.go:130] >      apparmor
	I1124 09:47:03.134050 1844089 command_runner.go:130] >      selinux
	I1124 09:47:03.134056 1844089 command_runner.go:130] >    LDFlags:          unknown
	I1124 09:47:03.134060 1844089 command_runner.go:130] >    SeccompEnabled:   true
	I1124 09:47:03.134068 1844089 command_runner.go:130] >    AppArmorEnabled:  false
	I1124 09:47:03.140942 1844089 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:47:03.143873 1844089 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:47:03.160952 1844089 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:47:03.165052 1844089 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1124 09:47:03.165287 1844089 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:47:03.165490 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.325050 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.479106 1844089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:47:03.632699 1844089 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:47:03.632773 1844089 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:47:03.664623 1844089 command_runner.go:130] > {
	I1124 09:47:03.664647 1844089 command_runner.go:130] >   "images":  [
	I1124 09:47:03.664652 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664661 1844089 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1124 09:47:03.664666 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664683 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1124 09:47:03.664695 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664705 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664715 1844089 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1124 09:47:03.664722 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664727 1844089 command_runner.go:130] >       "size":  "29035622",
	I1124 09:47:03.664734 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664738 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664746 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664750 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664760 1844089 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1124 09:47:03.664768 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664775 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1124 09:47:03.664780 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664788 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664797 1844089 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1124 09:47:03.664804 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664808 1844089 command_runner.go:130] >       "size":  "74488375",
	I1124 09:47:03.664816 1844089 command_runner.go:130] >       "username":  "nonroot",
	I1124 09:47:03.664820 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664827 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664831 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664838 1844089 command_runner.go:130] >       "id":  "1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca",
	I1124 09:47:03.664845 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664851 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.24-0"
	I1124 09:47:03.664855 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664859 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664873 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:62cae8d38d7e1187ef2841ebc55bef1c5a46f21a69675fae8351f92d3a3e9bc6"
	I1124 09:47:03.664880 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664885 1844089 command_runner.go:130] >       "size":  "63341525",
	I1124 09:47:03.664892 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.664896 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.664904 1844089 command_runner.go:130] >       },
	I1124 09:47:03.664908 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.664923 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.664929 1844089 command_runner.go:130] >     },
	I1124 09:47:03.664932 1844089 command_runner.go:130] >     {
	I1124 09:47:03.664939 1844089 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1124 09:47:03.664947 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.664951 1844089 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1124 09:47:03.664959 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664963 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.664974 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1124 09:47:03.664987 1844089 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1124 09:47:03.664994 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.664999 1844089 command_runner.go:130] >       "size":  "60857170",
	I1124 09:47:03.665002 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665009 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665013 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665016 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665020 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665024 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665028 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665039 1844089 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1124 09:47:03.665043 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665053 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1124 09:47:03.665057 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665065 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665078 1844089 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1124 09:47:03.665085 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665089 1844089 command_runner.go:130] >       "size":  "84947242",
	I1124 09:47:03.665093 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665131 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665140 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665144 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665148 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665155 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665163 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665174 1844089 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1124 09:47:03.665181 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665187 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1124 09:47:03.665195 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665198 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665206 1844089 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1124 09:47:03.665213 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665217 1844089 command_runner.go:130] >       "size":  "72167568",
	I1124 09:47:03.665221 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665229 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665232 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665236 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665244 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665247 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665254 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665262 1844089 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1124 09:47:03.665269 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665275 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1124 09:47:03.665278 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665285 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665292 1844089 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1124 09:47:03.665299 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665304 1844089 command_runner.go:130] >       "size":  "74105124",
	I1124 09:47:03.665308 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665315 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665319 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665326 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665333 1844089 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1124 09:47:03.665340 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665346 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1124 09:47:03.665353 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665357 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665369 1844089 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1124 09:47:03.665376 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665380 1844089 command_runner.go:130] >       "size":  "49819792",
	I1124 09:47:03.665384 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665388 1844089 command_runner.go:130] >         "value":  "0"
	I1124 09:47:03.665396 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665401 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665405 1844089 command_runner.go:130] >       "pinned":  false
	I1124 09:47:03.665412 1844089 command_runner.go:130] >     },
	I1124 09:47:03.665415 1844089 command_runner.go:130] >     {
	I1124 09:47:03.665426 1844089 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1124 09:47:03.665434 1844089 command_runner.go:130] >       "repoTags":  [
	I1124 09:47:03.665439 1844089 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.665442 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665446 1844089 command_runner.go:130] >       "repoDigests":  [
	I1124 09:47:03.665456 1844089 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1124 09:47:03.665460 1844089 command_runner.go:130] >       ],
	I1124 09:47:03.665469 1844089 command_runner.go:130] >       "size":  "517328",
	I1124 09:47:03.665473 1844089 command_runner.go:130] >       "uid":  {
	I1124 09:47:03.665478 1844089 command_runner.go:130] >         "value":  "65535"
	I1124 09:47:03.665485 1844089 command_runner.go:130] >       },
	I1124 09:47:03.665489 1844089 command_runner.go:130] >       "username":  "",
	I1124 09:47:03.665499 1844089 command_runner.go:130] >       "pinned":  true
	I1124 09:47:03.665506 1844089 command_runner.go:130] >     }
	I1124 09:47:03.665510 1844089 command_runner.go:130] >   ]
	I1124 09:47:03.665517 1844089 command_runner.go:130] > }
	I1124 09:47:03.667798 1844089 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:47:03.667821 1844089 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:47:03.667827 1844089 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:47:03.667924 1844089 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:47:03.668011 1844089 ssh_runner.go:195] Run: crio config
	I1124 09:47:03.726362 1844089 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1124 09:47:03.726390 1844089 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1124 09:47:03.726403 1844089 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1124 09:47:03.726416 1844089 command_runner.go:130] > #
	I1124 09:47:03.726461 1844089 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1124 09:47:03.726469 1844089 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1124 09:47:03.726481 1844089 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1124 09:47:03.726488 1844089 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1124 09:47:03.726498 1844089 command_runner.go:130] > # reload'.
	I1124 09:47:03.726518 1844089 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1124 09:47:03.726529 1844089 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1124 09:47:03.726536 1844089 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1124 09:47:03.726563 1844089 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1124 09:47:03.726573 1844089 command_runner.go:130] > [crio]
	I1124 09:47:03.726579 1844089 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1124 09:47:03.726585 1844089 command_runner.go:130] > # containers images, in this directory.
	I1124 09:47:03.727202 1844089 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1124 09:47:03.727221 1844089 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1124 09:47:03.727766 1844089 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1124 09:47:03.727795 1844089 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1124 09:47:03.728310 1844089 command_runner.go:130] > # imagestore = ""
	I1124 09:47:03.728328 1844089 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1124 09:47:03.728337 1844089 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1124 09:47:03.728921 1844089 command_runner.go:130] > # storage_driver = "overlay"
	I1124 09:47:03.728938 1844089 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1124 09:47:03.728946 1844089 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1124 09:47:03.729270 1844089 command_runner.go:130] > # storage_option = [
	I1124 09:47:03.729595 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.729612 1844089 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1124 09:47:03.729620 1844089 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1124 09:47:03.730268 1844089 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1124 09:47:03.730286 1844089 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1124 09:47:03.730295 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1124 09:47:03.730299 1844089 command_runner.go:130] > # always happen on a node reboot
	I1124 09:47:03.730901 1844089 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1124 09:47:03.730939 1844089 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1124 09:47:03.730951 1844089 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1124 09:47:03.730957 1844089 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1124 09:47:03.731426 1844089 command_runner.go:130] > # version_file_persist = ""
	I1124 09:47:03.731444 1844089 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1124 09:47:03.731453 1844089 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1124 09:47:03.732044 1844089 command_runner.go:130] > # internal_wipe = true
	I1124 09:47:03.732064 1844089 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1124 09:47:03.732071 1844089 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1124 09:47:03.732663 1844089 command_runner.go:130] > # internal_repair = true
	I1124 09:47:03.732708 1844089 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1124 09:47:03.732717 1844089 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1124 09:47:03.732723 1844089 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1124 09:47:03.733344 1844089 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1124 09:47:03.733360 1844089 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1124 09:47:03.733364 1844089 command_runner.go:130] > [crio.api]
	I1124 09:47:03.733370 1844089 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1124 09:47:03.733954 1844089 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1124 09:47:03.733970 1844089 command_runner.go:130] > # IP address on which the stream server will listen.
	I1124 09:47:03.734597 1844089 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1124 09:47:03.734618 1844089 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1124 09:47:03.734638 1844089 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1124 09:47:03.735322 1844089 command_runner.go:130] > # stream_port = "0"
	I1124 09:47:03.735342 1844089 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1124 09:47:03.735920 1844089 command_runner.go:130] > # stream_enable_tls = false
	I1124 09:47:03.735936 1844089 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1124 09:47:03.736379 1844089 command_runner.go:130] > # stream_idle_timeout = ""
	I1124 09:47:03.736427 1844089 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1124 09:47:03.736442 1844089 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1124 09:47:03.736931 1844089 command_runner.go:130] > # stream_tls_cert = ""
	I1124 09:47:03.736947 1844089 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1124 09:47:03.736954 1844089 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1124 09:47:03.737422 1844089 command_runner.go:130] > # stream_tls_key = ""
	I1124 09:47:03.737439 1844089 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1124 09:47:03.737447 1844089 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1124 09:47:03.737466 1844089 command_runner.go:130] > # automatically pick up the changes.
	I1124 09:47:03.737919 1844089 command_runner.go:130] > # stream_tls_ca = ""
	I1124 09:47:03.737973 1844089 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.738690 1844089 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1124 09:47:03.738709 1844089 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1124 09:47:03.739334 1844089 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1124 09:47:03.739351 1844089 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1124 09:47:03.739358 1844089 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1124 09:47:03.739383 1844089 command_runner.go:130] > [crio.runtime]
	I1124 09:47:03.739395 1844089 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1124 09:47:03.739402 1844089 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1124 09:47:03.739406 1844089 command_runner.go:130] > # "nofile=1024:2048"
	I1124 09:47:03.739432 1844089 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1124 09:47:03.739736 1844089 command_runner.go:130] > # default_ulimits = [
	I1124 09:47:03.740060 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.740075 1844089 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1124 09:47:03.740677 1844089 command_runner.go:130] > # no_pivot = false
	I1124 09:47:03.740693 1844089 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1124 09:47:03.740700 1844089 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1124 09:47:03.741305 1844089 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1124 09:47:03.741322 1844089 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1124 09:47:03.741328 1844089 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1124 09:47:03.741356 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.741816 1844089 command_runner.go:130] > # conmon = ""
	I1124 09:47:03.741833 1844089 command_runner.go:130] > # Cgroup setting for conmon
	I1124 09:47:03.741841 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1124 09:47:03.742193 1844089 command_runner.go:130] > conmon_cgroup = "pod"
	I1124 09:47:03.742211 1844089 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1124 09:47:03.742237 1844089 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1124 09:47:03.742253 1844089 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1124 09:47:03.742594 1844089 command_runner.go:130] > # conmon_env = [
	I1124 09:47:03.742962 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.742977 1844089 command_runner.go:130] > # Additional environment variables to set for all the
	I1124 09:47:03.742984 1844089 command_runner.go:130] > # containers. These are overridden if set in the
	I1124 09:47:03.742990 1844089 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1124 09:47:03.743288 1844089 command_runner.go:130] > # default_env = [
	I1124 09:47:03.743607 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.743619 1844089 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1124 09:47:03.743646 1844089 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1124 09:47:03.744217 1844089 command_runner.go:130] > # selinux = false
	I1124 09:47:03.744234 1844089 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1124 09:47:03.744279 1844089 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1124 09:47:03.744293 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.744768 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.744784 1844089 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1124 09:47:03.744790 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745254 1844089 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1124 09:47:03.745273 1844089 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1124 09:47:03.745281 1844089 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1124 09:47:03.745308 1844089 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1124 09:47:03.745322 1844089 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1124 09:47:03.745328 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.745934 1844089 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1124 09:47:03.745975 1844089 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1124 09:47:03.745989 1844089 command_runner.go:130] > # the cgroup blockio controller.
	I1124 09:47:03.746500 1844089 command_runner.go:130] > # blockio_config_file = ""
	I1124 09:47:03.746515 1844089 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1124 09:47:03.746541 1844089 command_runner.go:130] > # blockio parameters.
	I1124 09:47:03.747165 1844089 command_runner.go:130] > # blockio_reload = false
	I1124 09:47:03.747182 1844089 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1124 09:47:03.747187 1844089 command_runner.go:130] > # irqbalance daemon.
	I1124 09:47:03.747784 1844089 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1124 09:47:03.747803 1844089 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1124 09:47:03.747830 1844089 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1124 09:47:03.747843 1844089 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1124 09:47:03.748453 1844089 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1124 09:47:03.748471 1844089 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1124 09:47:03.748496 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.748966 1844089 command_runner.go:130] > # rdt_config_file = ""
	I1124 09:47:03.748982 1844089 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1124 09:47:03.749348 1844089 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1124 09:47:03.749364 1844089 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1124 09:47:03.749770 1844089 command_runner.go:130] > # separate_pull_cgroup = ""
	I1124 09:47:03.749788 1844089 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1124 09:47:03.749796 1844089 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1124 09:47:03.749820 1844089 command_runner.go:130] > # will be added.
	I1124 09:47:03.749833 1844089 command_runner.go:130] > # default_capabilities = [
	I1124 09:47:03.750067 1844089 command_runner.go:130] > # 	"CHOWN",
	I1124 09:47:03.750401 1844089 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1124 09:47:03.750646 1844089 command_runner.go:130] > # 	"FSETID",
	I1124 09:47:03.750659 1844089 command_runner.go:130] > # 	"FOWNER",
	I1124 09:47:03.750665 1844089 command_runner.go:130] > # 	"SETGID",
	I1124 09:47:03.750669 1844089 command_runner.go:130] > # 	"SETUID",
	I1124 09:47:03.750725 1844089 command_runner.go:130] > # 	"SETPCAP",
	I1124 09:47:03.750739 1844089 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1124 09:47:03.750745 1844089 command_runner.go:130] > # 	"KILL",
	I1124 09:47:03.750755 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.750774 1844089 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1124 09:47:03.750785 1844089 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1124 09:47:03.750991 1844089 command_runner.go:130] > # add_inheritable_capabilities = false
	I1124 09:47:03.751004 1844089 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1124 09:47:03.751023 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751034 1844089 command_runner.go:130] > default_sysctls = [
	I1124 09:47:03.751219 1844089 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1124 09:47:03.751480 1844089 command_runner.go:130] > ]
	I1124 09:47:03.751494 1844089 command_runner.go:130] > # List of devices on the host that a
	I1124 09:47:03.751501 1844089 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1124 09:47:03.751522 1844089 command_runner.go:130] > # allowed_devices = [
	I1124 09:47:03.751532 1844089 command_runner.go:130] > # 	"/dev/fuse",
	I1124 09:47:03.751536 1844089 command_runner.go:130] > # 	"/dev/net/tun",
	I1124 09:47:03.751539 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751545 1844089 command_runner.go:130] > # List of additional devices. specified as
	I1124 09:47:03.751558 1844089 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1124 09:47:03.751576 1844089 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1124 09:47:03.751614 1844089 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1124 09:47:03.751625 1844089 command_runner.go:130] > # additional_devices = [
	I1124 09:47:03.751802 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.751816 1844089 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1124 09:47:03.752056 1844089 command_runner.go:130] > # cdi_spec_dirs = [
	I1124 09:47:03.752288 1844089 command_runner.go:130] > # 	"/etc/cdi",
	I1124 09:47:03.752302 1844089 command_runner.go:130] > # 	"/var/run/cdi",
	I1124 09:47:03.752307 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752313 1844089 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1124 09:47:03.752348 1844089 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1124 09:47:03.752353 1844089 command_runner.go:130] > # Defaults to false.
	I1124 09:47:03.752752 1844089 command_runner.go:130] > # device_ownership_from_security_context = false
	I1124 09:47:03.752770 1844089 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1124 09:47:03.752778 1844089 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1124 09:47:03.752782 1844089 command_runner.go:130] > # hooks_dir = [
	I1124 09:47:03.752808 1844089 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1124 09:47:03.752819 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.752826 1844089 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1124 09:47:03.752833 1844089 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1124 09:47:03.752842 1844089 command_runner.go:130] > # its default mounts from the following two files:
	I1124 09:47:03.752845 1844089 command_runner.go:130] > #
	I1124 09:47:03.752852 1844089 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1124 09:47:03.752858 1844089 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1124 09:47:03.752881 1844089 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1124 09:47:03.752891 1844089 command_runner.go:130] > #
	I1124 09:47:03.752897 1844089 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1124 09:47:03.752913 1844089 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1124 09:47:03.752928 1844089 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1124 09:47:03.752934 1844089 command_runner.go:130] > #      only add mounts it finds in this file.
	I1124 09:47:03.752937 1844089 command_runner.go:130] > #
	I1124 09:47:03.752941 1844089 command_runner.go:130] > # default_mounts_file = ""
	I1124 09:47:03.752946 1844089 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1124 09:47:03.752955 1844089 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1124 09:47:03.753190 1844089 command_runner.go:130] > # pids_limit = -1
	I1124 09:47:03.753207 1844089 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1124 09:47:03.753245 1844089 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1124 09:47:03.753260 1844089 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1124 09:47:03.753269 1844089 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1124 09:47:03.753278 1844089 command_runner.go:130] > # log_size_max = -1
	I1124 09:47:03.753287 1844089 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1124 09:47:03.753296 1844089 command_runner.go:130] > # log_to_journald = false
	I1124 09:47:03.753313 1844089 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1124 09:47:03.753722 1844089 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1124 09:47:03.753734 1844089 command_runner.go:130] > # Path to directory for container attach sockets.
	I1124 09:47:03.753771 1844089 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1124 09:47:03.753785 1844089 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1124 09:47:03.753789 1844089 command_runner.go:130] > # bind_mount_prefix = ""
	I1124 09:47:03.753796 1844089 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1124 09:47:03.753804 1844089 command_runner.go:130] > # read_only = false
	I1124 09:47:03.753810 1844089 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1124 09:47:03.753817 1844089 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1124 09:47:03.753824 1844089 command_runner.go:130] > # live configuration reload.
	I1124 09:47:03.753828 1844089 command_runner.go:130] > # log_level = "info"
	I1124 09:47:03.753845 1844089 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1124 09:47:03.753857 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.754025 1844089 command_runner.go:130] > # log_filter = ""
	I1124 09:47:03.754041 1844089 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754049 1844089 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1124 09:47:03.754066 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754079 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754487 1844089 command_runner.go:130] > # uid_mappings = ""
	I1124 09:47:03.754504 1844089 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1124 09:47:03.754512 1844089 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1124 09:47:03.754516 1844089 command_runner.go:130] > # separated by comma.
	I1124 09:47:03.754547 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754559 1844089 command_runner.go:130] > # gid_mappings = ""
	I1124 09:47:03.754565 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1124 09:47:03.754572 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754582 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754590 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754595 1844089 command_runner.go:130] > # minimum_mappable_uid = -1
	I1124 09:47:03.754627 1844089 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1124 09:47:03.754641 1844089 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1124 09:47:03.754648 1844089 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1124 09:47:03.754662 1844089 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1124 09:47:03.754929 1844089 command_runner.go:130] > # minimum_mappable_gid = -1
	I1124 09:47:03.754942 1844089 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1124 09:47:03.754970 1844089 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1124 09:47:03.754983 1844089 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1124 09:47:03.754989 1844089 command_runner.go:130] > # ctr_stop_timeout = 30
	I1124 09:47:03.754994 1844089 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1124 09:47:03.755006 1844089 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1124 09:47:03.755011 1844089 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1124 09:47:03.755016 1844089 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1124 09:47:03.755021 1844089 command_runner.go:130] > # drop_infra_ctr = true
	I1124 09:47:03.755048 1844089 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1124 09:47:03.755061 1844089 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1124 09:47:03.755080 1844089 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1124 09:47:03.755090 1844089 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1124 09:47:03.755098 1844089 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1124 09:47:03.755104 1844089 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1124 09:47:03.755110 1844089 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1124 09:47:03.755118 1844089 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1124 09:47:03.755122 1844089 command_runner.go:130] > # shared_cpuset = ""
	I1124 09:47:03.755135 1844089 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1124 09:47:03.755143 1844089 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1124 09:47:03.755164 1844089 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1124 09:47:03.755182 1844089 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1124 09:47:03.755369 1844089 command_runner.go:130] > # pinns_path = ""
	I1124 09:47:03.755383 1844089 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1124 09:47:03.755391 1844089 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1124 09:47:03.755617 1844089 command_runner.go:130] > # enable_criu_support = true
	I1124 09:47:03.755632 1844089 command_runner.go:130] > # Enable/disable the generation of the container,
	I1124 09:47:03.755639 1844089 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1124 09:47:03.755935 1844089 command_runner.go:130] > # enable_pod_events = false
	I1124 09:47:03.755951 1844089 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1124 09:47:03.755976 1844089 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1124 09:47:03.755988 1844089 command_runner.go:130] > # default_runtime = "crun"
	I1124 09:47:03.756007 1844089 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1124 09:47:03.756063 1844089 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1124 09:47:03.756088 1844089 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1124 09:47:03.756099 1844089 command_runner.go:130] > # creation as a file is not desired either.
	I1124 09:47:03.756108 1844089 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1124 09:47:03.756127 1844089 command_runner.go:130] > # the hostname is being managed dynamically.
	I1124 09:47:03.756133 1844089 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1124 09:47:03.756166 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.756181 1844089 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1124 09:47:03.756199 1844089 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1124 09:47:03.756211 1844089 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1124 09:47:03.756217 1844089 command_runner.go:130] > # Each entry in the table should follow the format:
	I1124 09:47:03.756220 1844089 command_runner.go:130] > #
	I1124 09:47:03.756230 1844089 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1124 09:47:03.756235 1844089 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1124 09:47:03.756244 1844089 command_runner.go:130] > # runtime_type = "oci"
	I1124 09:47:03.756248 1844089 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1124 09:47:03.756253 1844089 command_runner.go:130] > # inherit_default_runtime = false
	I1124 09:47:03.756258 1844089 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1124 09:47:03.756285 1844089 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1124 09:47:03.756297 1844089 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1124 09:47:03.756301 1844089 command_runner.go:130] > # monitor_env = []
	I1124 09:47:03.756306 1844089 command_runner.go:130] > # privileged_without_host_devices = false
	I1124 09:47:03.756313 1844089 command_runner.go:130] > # allowed_annotations = []
	I1124 09:47:03.756319 1844089 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1124 09:47:03.756330 1844089 command_runner.go:130] > # no_sync_log = false
	I1124 09:47:03.756335 1844089 command_runner.go:130] > # default_annotations = {}
	I1124 09:47:03.756339 1844089 command_runner.go:130] > # stream_websockets = false
	I1124 09:47:03.756349 1844089 command_runner.go:130] > # seccomp_profile = ""
	I1124 09:47:03.756390 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.756402 1844089 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1124 09:47:03.756409 1844089 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1124 09:47:03.756416 1844089 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1124 09:47:03.756427 1844089 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1124 09:47:03.756448 1844089 command_runner.go:130] > #   in $PATH.
	I1124 09:47:03.756456 1844089 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1124 09:47:03.756461 1844089 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1124 09:47:03.756468 1844089 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of container
	I1124 09:47:03.756477 1844089 command_runner.go:130] > #   state.
	I1124 09:47:03.756489 1844089 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1124 09:47:03.756495 1844089 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1124 09:47:03.756515 1844089 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1124 09:47:03.756528 1844089 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1124 09:47:03.756534 1844089 command_runner.go:130] > #   the values from the default runtime on load time.
	I1124 09:47:03.756542 1844089 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1124 09:47:03.756551 1844089 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1124 09:47:03.756557 1844089 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1124 09:47:03.756564 1844089 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1124 09:47:03.756571 1844089 command_runner.go:130] > #   The currently recognized values are:
	I1124 09:47:03.756579 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1124 09:47:03.756608 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1124 09:47:03.756621 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1124 09:47:03.756627 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1124 09:47:03.756635 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1124 09:47:03.756647 1844089 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1124 09:47:03.756654 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1124 09:47:03.756661 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1124 09:47:03.756671 1844089 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1124 09:47:03.756687 1844089 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1124 09:47:03.756700 1844089 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1124 09:47:03.756720 1844089 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1124 09:47:03.756731 1844089 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1124 09:47:03.756738 1844089 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1124 09:47:03.756751 1844089 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1124 09:47:03.756759 1844089 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1124 09:47:03.756769 1844089 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1124 09:47:03.756774 1844089 command_runner.go:130] > #   deprecated option "conmon".
	I1124 09:47:03.756781 1844089 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1124 09:47:03.756803 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1124 09:47:03.756820 1844089 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1124 09:47:03.756831 1844089 command_runner.go:130] > #   should be moved to the container's cgroup
	I1124 09:47:03.756843 1844089 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1124 09:47:03.756853 1844089 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1124 09:47:03.756862 1844089 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1124 09:47:03.756870 1844089 command_runner.go:130] > #   conmon-rs by using:
	I1124 09:47:03.756878 1844089 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1124 09:47:03.756886 1844089 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1124 09:47:03.756907 1844089 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1124 09:47:03.756926 1844089 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1124 09:47:03.756938 1844089 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1124 09:47:03.756945 1844089 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1124 09:47:03.756958 1844089 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1124 09:47:03.756963 1844089 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1124 09:47:03.756972 1844089 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1124 09:47:03.756984 1844089 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1124 09:47:03.756999 1844089 command_runner.go:130] > #   when a machine crash happens.
	I1124 09:47:03.757012 1844089 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1124 09:47:03.757021 1844089 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1124 09:47:03.757033 1844089 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1124 09:47:03.757038 1844089 command_runner.go:130] > #   seccomp profile for the runtime.
	I1124 09:47:03.757047 1844089 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1124 09:47:03.757058 1844089 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1124 09:47:03.757076 1844089 command_runner.go:130] > #
	I1124 09:47:03.757087 1844089 command_runner.go:130] > # Using the seccomp notifier feature:
	I1124 09:47:03.757091 1844089 command_runner.go:130] > #
	I1124 09:47:03.757115 1844089 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1124 09:47:03.757130 1844089 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1124 09:47:03.757134 1844089 command_runner.go:130] > #
	I1124 09:47:03.757141 1844089 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1124 09:47:03.757151 1844089 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1124 09:47:03.757154 1844089 command_runner.go:130] > #
	I1124 09:47:03.757165 1844089 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1124 09:47:03.757172 1844089 command_runner.go:130] > # feature.
	I1124 09:47:03.757175 1844089 command_runner.go:130] > #
	I1124 09:47:03.757195 1844089 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1124 09:47:03.757204 1844089 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1124 09:47:03.757220 1844089 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1124 09:47:03.757233 1844089 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1124 09:47:03.757239 1844089 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1124 09:47:03.757247 1844089 command_runner.go:130] > #
	I1124 09:47:03.757258 1844089 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1124 09:47:03.757268 1844089 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1124 09:47:03.757271 1844089 command_runner.go:130] > #
	I1124 09:47:03.757277 1844089 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1124 09:47:03.757283 1844089 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1124 09:47:03.757298 1844089 command_runner.go:130] > #
	I1124 09:47:03.757320 1844089 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1124 09:47:03.757333 1844089 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1124 09:47:03.757341 1844089 command_runner.go:130] > # limitation.
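As a sketch of the runtime-handler table format described in the comments above, a handler that enables the seccomp notifier could look like the following. This is illustrative only: the handler name "runc-notify" and the paths are assumptions, not values taken from this node's configuration.

```toml
# Hypothetical runtime handler entry (sketch only; name and paths are
# assumed, not read from this test node's /etc/crio configuration).
[crio.runtime.runtimes.runc-notify]
runtime_path = "/usr/libexec/crio/runc"
runtime_type = "oci"
runtime_root = "/run/runc"
monitor_path = "/usr/libexec/crio/conmon"
# Permit the seccomp notifier annotation for pods using this handler.
allowed_annotations = [
	"io.kubernetes.cri-o.seccompNotifierAction",
]
```

A pod would then select such a handler through its RuntimeClass and set the "io.kubernetes.cri-o.seccompNotifierAction" annotation (e.g. to "stop") on the sandbox, as described in the surrounding comments.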
	I1124 09:47:03.757617 1844089 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1124 09:47:03.757630 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1124 09:47:03.757635 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.757639 1844089 command_runner.go:130] > runtime_root = "/run/crun"
	I1124 09:47:03.757643 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.757670 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.757675 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.757680 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.757690 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.757695 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.757700 1844089 command_runner.go:130] > allowed_annotations = [
	I1124 09:47:03.757954 1844089 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1124 09:47:03.757971 1844089 command_runner.go:130] > ]
	I1124 09:47:03.757978 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.757982 1844089 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1124 09:47:03.758003 1844089 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1124 09:47:03.758013 1844089 command_runner.go:130] > runtime_type = ""
	I1124 09:47:03.758018 1844089 command_runner.go:130] > runtime_root = "/run/runc"
	I1124 09:47:03.758023 1844089 command_runner.go:130] > inherit_default_runtime = false
	I1124 09:47:03.758033 1844089 command_runner.go:130] > runtime_config_path = ""
	I1124 09:47:03.758037 1844089 command_runner.go:130] > container_min_memory = ""
	I1124 09:47:03.758042 1844089 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1124 09:47:03.758047 1844089 command_runner.go:130] > monitor_cgroup = "pod"
	I1124 09:47:03.758051 1844089 command_runner.go:130] > monitor_exec_cgroup = ""
	I1124 09:47:03.758456 1844089 command_runner.go:130] > privileged_without_host_devices = false
	I1124 09:47:03.758471 1844089 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1124 09:47:03.758477 1844089 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1124 09:47:03.758504 1844089 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1124 09:47:03.758514 1844089 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1124 09:47:03.758525 1844089 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1124 09:47:03.758550 1844089 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1124 09:47:03.758572 1844089 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1124 09:47:03.758585 1844089 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1124 09:47:03.758595 1844089 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1124 09:47:03.758608 1844089 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1124 09:47:03.758614 1844089 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1124 09:47:03.758621 1844089 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1124 09:47:03.758629 1844089 command_runner.go:130] > # Example:
	I1124 09:47:03.758634 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1124 09:47:03.758650 1844089 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1124 09:47:03.758663 1844089 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1124 09:47:03.758670 1844089 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1124 09:47:03.758684 1844089 command_runner.go:130] > # cpuset = "0-1"
	I1124 09:47:03.758691 1844089 command_runner.go:130] > # cpushares = "5"
	I1124 09:47:03.758695 1844089 command_runner.go:130] > # cpuquota = "1000"
	I1124 09:47:03.758700 1844089 command_runner.go:130] > # cpuperiod = "100000"
	I1124 09:47:03.758703 1844089 command_runner.go:130] > # cpulimit = "35"
	I1124 09:47:03.758714 1844089 command_runner.go:130] > # Where:
	I1124 09:47:03.758719 1844089 command_runner.go:130] > # The workload name is workload-type.
	I1124 09:47:03.758726 1844089 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1124 09:47:03.758738 1844089 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1124 09:47:03.758744 1844089 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1124 09:47:03.758763 1844089 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1124 09:47:03.758772 1844089 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1124 09:47:03.758787 1844089 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1124 09:47:03.758800 1844089 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1124 09:47:03.758805 1844089 command_runner.go:130] > # Default value is set to true
	I1124 09:47:03.758816 1844089 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1124 09:47:03.758822 1844089 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1124 09:47:03.758827 1844089 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1124 09:47:03.758837 1844089 command_runner.go:130] > # Default value is set to 'false'
	I1124 09:47:03.758841 1844089 command_runner.go:130] > # disable_hostport_mapping = false
	I1124 09:47:03.758846 1844089 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1124 09:47:03.758869 1844089 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1124 09:47:03.759115 1844089 command_runner.go:130] > # timezone = ""
	I1124 09:47:03.759131 1844089 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1124 09:47:03.759134 1844089 command_runner.go:130] > #
	I1124 09:47:03.759141 1844089 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1124 09:47:03.759163 1844089 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1124 09:47:03.759174 1844089 command_runner.go:130] > [crio.image]
	I1124 09:47:03.759180 1844089 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1124 09:47:03.759194 1844089 command_runner.go:130] > # default_transport = "docker://"
	I1124 09:47:03.759204 1844089 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1124 09:47:03.759211 1844089 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759215 1844089 command_runner.go:130] > # global_auth_file = ""
	I1124 09:47:03.759237 1844089 command_runner.go:130] > # The image used to instantiate infra containers.
	I1124 09:47:03.759259 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759457 1844089 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1124 09:47:03.759477 1844089 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1124 09:47:03.759497 1844089 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1124 09:47:03.759511 1844089 command_runner.go:130] > # This option supports live configuration reload.
	I1124 09:47:03.759702 1844089 command_runner.go:130] > # pause_image_auth_file = ""
	I1124 09:47:03.759716 1844089 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1124 09:47:03.759723 1844089 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1124 09:47:03.759742 1844089 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1124 09:47:03.759757 1844089 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1124 09:47:03.760047 1844089 command_runner.go:130] > # pause_command = "/pause"
	I1124 09:47:03.760064 1844089 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1124 09:47:03.760071 1844089 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1124 09:47:03.760077 1844089 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1124 09:47:03.760108 1844089 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1124 09:47:03.760115 1844089 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1124 09:47:03.760126 1844089 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1124 09:47:03.760131 1844089 command_runner.go:130] > # pinned_images = [
	I1124 09:47:03.760134 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760140 1844089 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1124 09:47:03.760146 1844089 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1124 09:47:03.760157 1844089 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1124 09:47:03.760175 1844089 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1124 09:47:03.760186 1844089 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1124 09:47:03.760191 1844089 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1124 09:47:03.760197 1844089 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1124 09:47:03.760209 1844089 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1124 09:47:03.760216 1844089 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1124 09:47:03.760225 1844089 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1124 09:47:03.760231 1844089 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1124 09:47:03.760246 1844089 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1124 09:47:03.760260 1844089 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1124 09:47:03.760282 1844089 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1124 09:47:03.760292 1844089 command_runner.go:130] > # changing them here.
	I1124 09:47:03.760298 1844089 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1124 09:47:03.760302 1844089 command_runner.go:130] > # insecure_registries = [
	I1124 09:47:03.760312 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.760318 1844089 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1124 09:47:03.760329 1844089 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1124 09:47:03.760704 1844089 command_runner.go:130] > # image_volumes = "mkdir"
	I1124 09:47:03.760720 1844089 command_runner.go:130] > # Temporary directory to use for storing big files
	I1124 09:47:03.760964 1844089 command_runner.go:130] > # big_files_temporary_dir = ""
	I1124 09:47:03.760980 1844089 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1124 09:47:03.760987 1844089 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1124 09:47:03.760992 1844089 command_runner.go:130] > # auto_reload_registries = false
	I1124 09:47:03.761030 1844089 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1124 09:47:03.761047 1844089 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1124 09:47:03.761054 1844089 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1124 09:47:03.761232 1844089 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1124 09:47:03.761247 1844089 command_runner.go:130] > # The mode of short name resolution.
	I1124 09:47:03.761255 1844089 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1124 09:47:03.761263 1844089 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1124 09:47:03.761289 1844089 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1124 09:47:03.761475 1844089 command_runner.go:130] > # short_name_mode = "enforcing"
	I1124 09:47:03.761491 1844089 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1124 09:47:03.761498 1844089 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1124 09:47:03.761714 1844089 command_runner.go:130] > # oci_artifact_mount_support = true
	I1124 09:47:03.761730 1844089 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1124 09:47:03.761735 1844089 command_runner.go:130] > # CNI plugins.
	I1124 09:47:03.761738 1844089 command_runner.go:130] > [crio.network]
	I1124 09:47:03.761777 1844089 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1124 09:47:03.761790 1844089 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1124 09:47:03.761797 1844089 command_runner.go:130] > # cni_default_network = ""
	I1124 09:47:03.761810 1844089 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1124 09:47:03.761814 1844089 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1124 09:47:03.761820 1844089 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1124 09:47:03.761839 1844089 command_runner.go:130] > # plugin_dirs = [
	I1124 09:47:03.762075 1844089 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1124 09:47:03.762088 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762092 1844089 command_runner.go:130] > # List of included pod metrics.
	I1124 09:47:03.762097 1844089 command_runner.go:130] > # included_pod_metrics = [
	I1124 09:47:03.762100 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.762106 1844089 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1124 09:47:03.762124 1844089 command_runner.go:130] > [crio.metrics]
	I1124 09:47:03.762136 1844089 command_runner.go:130] > # Globally enable or disable metrics support.
	I1124 09:47:03.762321 1844089 command_runner.go:130] > # enable_metrics = false
	I1124 09:47:03.762336 1844089 command_runner.go:130] > # Specify enabled metrics collectors.
	I1124 09:47:03.762342 1844089 command_runner.go:130] > # Per default all metrics are enabled.
	I1124 09:47:03.762349 1844089 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1124 09:47:03.762356 1844089 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1124 09:47:03.762386 1844089 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1124 09:47:03.762392 1844089 command_runner.go:130] > # metrics_collectors = [
	I1124 09:47:03.763119 1844089 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1124 09:47:03.763143 1844089 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1124 09:47:03.763149 1844089 command_runner.go:130] > # 	"containers_oom_total",
	I1124 09:47:03.763153 1844089 command_runner.go:130] > # 	"processes_defunct",
	I1124 09:47:03.763188 1844089 command_runner.go:130] > # 	"operations_total",
	I1124 09:47:03.763201 1844089 command_runner.go:130] > # 	"operations_latency_seconds",
	I1124 09:47:03.763207 1844089 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1124 09:47:03.763212 1844089 command_runner.go:130] > # 	"operations_errors_total",
	I1124 09:47:03.763216 1844089 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1124 09:47:03.763221 1844089 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1124 09:47:03.763226 1844089 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1124 09:47:03.763237 1844089 command_runner.go:130] > # 	"image_pulls_success_total",
	I1124 09:47:03.763260 1844089 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1124 09:47:03.763265 1844089 command_runner.go:130] > # 	"containers_oom_count_total",
	I1124 09:47:03.763270 1844089 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1124 09:47:03.763282 1844089 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1124 09:47:03.763286 1844089 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1124 09:47:03.763290 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763295 1844089 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1124 09:47:03.763300 1844089 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1124 09:47:03.763305 1844089 command_runner.go:130] > # The port on which the metrics server will listen.
	I1124 09:47:03.763313 1844089 command_runner.go:130] > # metrics_port = 9090
	I1124 09:47:03.763327 1844089 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1124 09:47:03.763337 1844089 command_runner.go:130] > # metrics_socket = ""
	I1124 09:47:03.763343 1844089 command_runner.go:130] > # The certificate for the secure metrics server.
	I1124 09:47:03.763349 1844089 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1124 09:47:03.763360 1844089 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1124 09:47:03.763365 1844089 command_runner.go:130] > # certificate on any modification event.
	I1124 09:47:03.763369 1844089 command_runner.go:130] > # metrics_cert = ""
	I1124 09:47:03.763375 1844089 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1124 09:47:03.763379 1844089 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1124 09:47:03.763384 1844089 command_runner.go:130] > # metrics_key = ""
	I1124 09:47:03.763415 1844089 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1124 09:47:03.763426 1844089 command_runner.go:130] > [crio.tracing]
	I1124 09:47:03.763442 1844089 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1124 09:47:03.763451 1844089 command_runner.go:130] > # enable_tracing = false
	I1124 09:47:03.763456 1844089 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1124 09:47:03.763461 1844089 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1124 09:47:03.763468 1844089 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1124 09:47:03.763476 1844089 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1124 09:47:03.763481 1844089 command_runner.go:130] > # CRI-O NRI configuration.
	I1124 09:47:03.763500 1844089 command_runner.go:130] > [crio.nri]
	I1124 09:47:03.763505 1844089 command_runner.go:130] > # Globally enable or disable NRI.
	I1124 09:47:03.763508 1844089 command_runner.go:130] > # enable_nri = true
	I1124 09:47:03.763524 1844089 command_runner.go:130] > # NRI socket to listen on.
	I1124 09:47:03.763535 1844089 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1124 09:47:03.763540 1844089 command_runner.go:130] > # NRI plugin directory to use.
	I1124 09:47:03.763544 1844089 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1124 09:47:03.763552 1844089 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1124 09:47:03.763560 1844089 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1124 09:47:03.763566 1844089 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1124 09:47:03.763634 1844089 command_runner.go:130] > # nri_disable_connections = false
	I1124 09:47:03.763648 1844089 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1124 09:47:03.763654 1844089 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1124 09:47:03.763669 1844089 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1124 09:47:03.763681 1844089 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1124 09:47:03.763685 1844089 command_runner.go:130] > # NRI default validator configuration.
	I1124 09:47:03.763692 1844089 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1124 09:47:03.763699 1844089 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1124 09:47:03.763703 1844089 command_runner.go:130] > # can be restricted/rejected:
	I1124 09:47:03.763707 1844089 command_runner.go:130] > # - OCI hook injection
	I1124 09:47:03.763719 1844089 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1124 09:47:03.763724 1844089 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1124 09:47:03.763730 1844089 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1124 09:47:03.763748 1844089 command_runner.go:130] > # - adjustment of linux namespaces
	I1124 09:47:03.763770 1844089 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1124 09:47:03.763778 1844089 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1124 09:47:03.763789 1844089 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1124 09:47:03.763792 1844089 command_runner.go:130] > #
	I1124 09:47:03.763797 1844089 command_runner.go:130] > # [crio.nri.default_validator]
	I1124 09:47:03.763802 1844089 command_runner.go:130] > # nri_enable_default_validator = false
	I1124 09:47:03.763807 1844089 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1124 09:47:03.763813 1844089 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1124 09:47:03.763843 1844089 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1124 09:47:03.763859 1844089 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1124 09:47:03.763864 1844089 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1124 09:47:03.763875 1844089 command_runner.go:130] > # nri_validator_required_plugins = [
	I1124 09:47:03.763879 1844089 command_runner.go:130] > # ]
	I1124 09:47:03.763885 1844089 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1124 09:47:03.763897 1844089 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1124 09:47:03.763900 1844089 command_runner.go:130] > [crio.stats]
	I1124 09:47:03.763906 1844089 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1124 09:47:03.763912 1844089 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1124 09:47:03.763930 1844089 command_runner.go:130] > # stats_collection_period = 0
	I1124 09:47:03.763938 1844089 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1124 09:47:03.763955 1844089 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1124 09:47:03.763966 1844089 command_runner.go:130] > # collection_period = 0
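	The commented-out defaults dumped above can be overridden with a drop-in file. A hypothetical sketch (file name and chosen values are illustrative, not taken from this run) that enables the metrics endpoint and narrows the collector set might look like:

	```toml
	# /etc/crio/crio.conf.d/20-metrics.conf -- illustrative drop-in, not part of this run
	[crio.metrics]
	# Turn on the Prometheus endpoint on the loopback interface.
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	# Restrict collection to a subset of the collectors listed above.
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]
	```

	CRI-O merges drop-in files from the configured directory in lexical order, which is why the log shows 02-crio.conf applied before 10-crio.conf.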
	I1124 09:47:03.765749 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69660512Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1124 09:47:03.765775 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696644858Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1124 09:47:03.765802 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696680353Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1124 09:47:03.765817 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696705773Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1124 09:47:03.765831 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.696792248Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:47:03.765844 1844089 command_runner.go:130] ! time="2025-11-24T09:47:03.69715048Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1124 09:47:03.765855 1844089 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1124 09:47:03.766230 1844089 cni.go:84] Creating CNI manager for ""
	I1124 09:47:03.766250 1844089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:47:03.766285 1844089 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:47:03.766313 1844089 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:47:03.766550 1844089 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
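	The kubeadm options above pair ServiceCIDR 10.96.0.0/12 with PodSubnet 10.244.0.0/16; kubeadm requires these ranges to be disjoint. A minimal sketch of that check with the Python standard library (the CIDRs are copied from the log, everything else is illustrative):

	```python
	import ipaddress

	# CIDRs taken from the kubeadm options logged above.
	service_cidr = ipaddress.ip_network("10.96.0.0/12")   # 10.96.0.0 - 10.111.255.255
	pod_subnet = ipaddress.ip_network("10.244.0.0/16")    # 10.244.0.0 - 10.244.255.255

	# overlaps() is symmetric, so a single call covers both directions.
	print(service_cidr.overlaps(pod_subnet))  # → False
	```

	The same check applies to any custom --service-cidr / --pod-network-cidr pair passed to minikube.
	
	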
	I1124 09:47:03.766656 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:47:03.773791 1844089 command_runner.go:130] > kubeadm
	I1124 09:47:03.773812 1844089 command_runner.go:130] > kubectl
	I1124 09:47:03.773818 1844089 command_runner.go:130] > kubelet
	I1124 09:47:03.774893 1844089 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:47:03.774995 1844089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:47:03.782726 1844089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:47:03.796280 1844089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:47:03.809559 1844089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 09:47:03.822485 1844089 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:47:03.826210 1844089 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1124 09:47:03.826334 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:03.934288 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:04.458773 1844089 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:47:04.458800 1844089 certs.go:195] generating shared ca certs ...
	I1124 09:47:04.458824 1844089 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:04.458988 1844089 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:47:04.459071 1844089 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:47:04.459080 1844089 certs.go:257] generating profile certs ...
	I1124 09:47:04.459195 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:47:04.459263 1844089 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:47:04.459319 1844089 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:47:04.459333 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1124 09:47:04.459352 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1124 09:47:04.459364 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1124 09:47:04.459374 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1124 09:47:04.459384 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1124 09:47:04.459403 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1124 09:47:04.459415 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1124 09:47:04.459426 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1124 09:47:04.459482 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:47:04.459525 1844089 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:47:04.459534 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:47:04.459574 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:47:04.459609 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:47:04.459638 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:47:04.459701 1844089 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:47:04.459738 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.459752 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem -> /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.459763 1844089 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.460411 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:47:04.483964 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:47:04.505086 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:47:04.526066 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:47:04.552811 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:47:04.572010 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:47:04.590830 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:47:04.609063 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:47:04.627178 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:47:04.645228 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:47:04.662875 1844089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:47:04.680934 1844089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:47:04.694072 1844089 ssh_runner.go:195] Run: openssl version
	I1124 09:47:04.700410 1844089 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1124 09:47:04.700488 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:47:04.708800 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712351 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712441 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.712518 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:47:04.755374 1844089 command_runner.go:130] > 3ec20f2e
	I1124 09:47:04.755866 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:47:04.763956 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:47:04.772579 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776497 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776523 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.776574 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:47:04.817126 1844089 command_runner.go:130] > b5213941
	I1124 09:47:04.817555 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:47:04.825631 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:47:04.834323 1844089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838391 1844089 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838437 1844089 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.838503 1844089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:47:04.879479 1844089 command_runner.go:130] > 51391683
	I1124 09:47:04.879964 1844089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 09:47:04.888201 1844089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892298 1844089 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:47:04.892323 1844089 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1124 09:47:04.892330 1844089 command_runner.go:130] > Device: 259,1	Inode: 1049847     Links: 1
	I1124 09:47:04.892337 1844089 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1124 09:47:04.892344 1844089 command_runner.go:130] > Access: 2025-11-24 09:42:55.781942492 +0000
	I1124 09:47:04.892349 1844089 command_runner.go:130] > Modify: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892354 1844089 command_runner.go:130] > Change: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892360 1844089 command_runner.go:130] >  Birth: 2025-11-24 09:38:52.266867059 +0000
	I1124 09:47:04.892420 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:47:04.935687 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.935791 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:47:04.977560 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:04.978011 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:47:05.021496 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.021984 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:47:05.064844 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.065359 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:47:05.108127 1844089 command_runner.go:130] > Certificate will not expire
	I1124 09:47:05.108275 1844089 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:47:05.149417 1844089 command_runner.go:130] > Certificate will not expire
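	Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours, which is how minikube decides whether a cert needs regeneration. A minimal Python sketch of the same predicate (the expiry timestamp below is a stand-in; a real check would parse notAfter out of the certificate):

	```python
	from datetime import datetime, timedelta, timezone

	def expires_within(not_after: datetime, seconds: int) -> bool:
	    """Mirror `openssl x509 -checkend N`: True if the certificate's
	    notAfter falls within the next N seconds (the check would fail)."""
	    return not_after <= datetime.now(timezone.utc) + timedelta(seconds=seconds)

	# Stand-in expiry one year out, standing in for a freshly minted cert.
	not_after = datetime.now(timezone.utc) + timedelta(days=365)
	print("Certificate will not expire" if not expires_within(not_after, 86400)
	      else "Certificate will expire")  # → Certificate will not expire
	```
	
	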
	I1124 09:47:05.149874 1844089 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:47:05.149970 1844089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:47:05.150065 1844089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:47:05.178967 1844089 cri.go:89] found id: ""
	I1124 09:47:05.179068 1844089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:47:05.186015 1844089 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1124 09:47:05.186039 1844089 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1124 09:47:05.186047 1844089 command_runner.go:130] > /var/lib/minikube/etcd:
	I1124 09:47:05.187003 1844089 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:47:05.187020 1844089 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:47:05.187103 1844089 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:47:05.195380 1844089 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:47:05.195777 1844089 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-373432" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.195884 1844089 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-1804834/kubeconfig needs updating (will repair): [kubeconfig missing "functional-373432" cluster setting kubeconfig missing "functional-373432" context setting]
	I1124 09:47:05.196176 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.196576 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.196729 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.197389 1844089 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 09:47:05.197410 1844089 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 09:47:05.197417 1844089 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 09:47:05.197421 1844089 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 09:47:05.197425 1844089 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 09:47:05.197478 1844089 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1124 09:47:05.197834 1844089 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:47:05.206841 1844089 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1124 09:47:05.206877 1844089 kubeadm.go:602] duration metric: took 19.851198ms to restartPrimaryControlPlane
	I1124 09:47:05.206901 1844089 kubeadm.go:403] duration metric: took 57.044926ms to StartCluster
	I1124 09:47:05.206915 1844089 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.206989 1844089 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.207632 1844089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:47:05.208100 1844089 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:47:05.207869 1844089 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 09:47:05.208216 1844089 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:47:05.208554 1844089 addons.go:70] Setting storage-provisioner=true in profile "functional-373432"
	I1124 09:47:05.208570 1844089 addons.go:239] Setting addon storage-provisioner=true in "functional-373432"
	I1124 09:47:05.208595 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.208650 1844089 addons.go:70] Setting default-storageclass=true in profile "functional-373432"
	I1124 09:47:05.208696 1844089 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-373432"
	I1124 09:47:05.208964 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.209057 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.215438 1844089 out.go:179] * Verifying Kubernetes components...
	I1124 09:47:05.218563 1844089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:47:05.247382 1844089 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:47:05.249311 1844089 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:47:05.249495 1844089 kapi.go:59] client config for functional-373432: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 09:47:05.249781 1844089 addons.go:239] Setting addon default-storageclass=true in "functional-373432"
	I1124 09:47:05.249815 1844089 host.go:66] Checking if "functional-373432" exists ...
	I1124 09:47:05.250242 1844089 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:47:05.250436 1844089 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.250452 1844089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:47:05.250491 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.282635 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.300501 1844089 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:05.300528 1844089 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:47:05.300592 1844089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:47:05.336568 1844089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:47:05.425988 1844089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:47:05.454084 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:05.488439 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.208671 1844089 node_ready.go:35] waiting up to 6m0s for node "functional-373432" to be "Ready" ...
	I1124 09:47:06.208714 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208746 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208771 1844089 retry.go:31] will retry after 239.578894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.208823 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.208836 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208841 1844089 retry.go:31] will retry after 363.194189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.208887 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.209209 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.448577 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:06.513317 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.513406 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.513430 1844089 retry.go:31] will retry after 455.413395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.572567 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:06.636310 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:06.636351 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.636371 1844089 retry.go:31] will retry after 493.81878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:06.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:06.709791 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:06.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:06.969606 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.043721 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.043767 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.043786 1844089 retry.go:31] will retry after 737.997673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.130919 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.189702 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.189740 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.189777 1844089 retry.go:31] will retry after 362.835066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.209918 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.209989 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.210325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.552843 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:07.609433 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.612888 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.612921 1844089 retry.go:31] will retry after 813.541227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.709061 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:07.709150 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:07.709464 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:07.782677 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:07.840776 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:07.844096 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:07.844127 1844089 retry.go:31] will retry after 1.225797654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.209825 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.209923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.210302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:08.210357 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:08.426707 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:08.489610 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:08.489648 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.489666 1844089 retry.go:31] will retry after 1.230621023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:08.709036 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:08.709146 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:08.709492 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.070184 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:09.132816 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.132856 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.132877 1844089 retry.go:31] will retry after 1.628151176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.209565 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.709579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:09.709673 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:09.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:09.721235 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:09.779532 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:09.779572 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:09.779591 1844089 retry.go:31] will retry after 1.535326746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.208957 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:10.709858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:10.709945 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:10.710278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:10.710329 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:10.761451 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:10.821517 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:10.825161 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:10.825191 1844089 retry.go:31] will retry after 2.22755575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.209753 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.209827 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.210169 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:11.315630 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:11.371370 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:11.375223 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.375258 1844089 retry.go:31] will retry after 3.052255935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:11.709710 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:11.709783 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:11.710113 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.208839 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.208935 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.209276 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:12.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:12.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:12.709439 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:13.052884 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:13.107513 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:13.110665 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.110696 1844089 retry.go:31] will retry after 2.047132712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:13.208986 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:13.209499 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:13.708863 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:13.708946 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:13.709225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:14.428018 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:14.497830 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:14.500554 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.500586 1844089 retry.go:31] will retry after 5.866686171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:14.708931 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:14.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:14.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.158123 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:15.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.208926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.209197 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:15.236504 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:15.240097 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.240134 1844089 retry.go:31] will retry after 4.86514919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:15.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:15.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:15.710246 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:15.710298 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:16.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.209082 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:16.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:16.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:16.709395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.209050 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:17.708987 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:17.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:17.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:18.208849 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.208918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.209189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:18.209229 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:18.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:18.708962 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:18.709278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:19.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:19.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:19.709232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:20.105978 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:20.163220 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.166411 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.166455 1844089 retry.go:31] will retry after 7.973407294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.209623 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.209700 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.210040 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:20.210093 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:20.367494 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:20.426176 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:20.426221 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.426244 1844089 retry.go:31] will retry after 7.002953248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:20.709713 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:20.709786 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:20.710109 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.208846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.208922 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:21.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:21.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:21.709365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.209249 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.209597 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:22.709231 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:22.709348 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:22.709682 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:22.709735 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:23.209559 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.209633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.209953 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:23.709725 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:23.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:23.710141 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.209255 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:24.708973 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:24.709052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:24.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:25.209389 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.209467 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.209841 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:25.209903 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:25.709642 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:25.709719 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:25.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.209709 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.210119 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:26.709913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:26.709992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:26.710307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.208828 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.208902 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.209226 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:27.429779 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:27.489021 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:27.489061 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.489078 1844089 retry.go:31] will retry after 11.455669174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:27.709620 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:27.709697 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:27.710061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:27.710112 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:28.140690 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:28.207909 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:28.207963 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.207981 1844089 retry.go:31] will retry after 7.295318191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:28.208971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.209358 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:28.709045 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:28.709130 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:28.709479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.209267 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.209347 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.209673 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:29.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:29.709959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:29.710312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:29.710375 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:30.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.209713 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.210010 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:30.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:30.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:30.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.208899 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:31.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:31.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:31.709282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:32.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.209035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.209376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:32.209432 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:32.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:32.709024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:32.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.208858 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.208927 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.209204 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:33.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:33.709003 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:33.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:34.208983 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.209403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:34.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:34.709379 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:34.709553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:34.709927 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.209738 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.209811 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.210108 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:35.503497 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:35.564590 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:35.564633 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.564653 1844089 retry.go:31] will retry after 18.757863028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:35.709881 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:35.709958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:35.710297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.208842 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.208909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.209196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:36.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:36.708965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:36.709288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:36.709337 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:37.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.209034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:37.708926 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:37.708999 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:37.709305 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:38.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:38.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:38.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:38.709418 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:38.945958 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:39.002116 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:39.006563 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.006598 1844089 retry.go:31] will retry after 17.731618054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:39.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.209830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.210101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:39.708971 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:39.709049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:39.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.209137 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:40.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:40.709279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:40.709607 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:40.709669 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:41.209237 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:41.709465 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:41.709538 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:41.709862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.209660 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.209740 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.210065 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:42.709826 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:42.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:42.710247 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:42.710300 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:43.208851 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.208929 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.209238 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:43.708832 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:43.708904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:43.709198 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.209292 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:44.709200 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:44.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:44.709637 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:45.209579 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.209674 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.210095 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:45.210174 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:45.708846 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:45.708926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:45.709257 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:46.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:46.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:46.709348 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.208969 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:47.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:47.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:47.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:47.709460 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:48.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:48.708913 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:48.708985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:48.709311 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.209041 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:49.709341 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:49.709413 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:49.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:49.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:50.209504 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.209579 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.209916 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:50.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:50.709795 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:50.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.209819 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.210144 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:51.708840 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:51.708913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:51.709251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:52.208995 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.209079 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.209450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:52.209504 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:52.709193 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:52.709263 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:52.709579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.209019 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:53.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:53.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:53.709514 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.208914 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.208983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:54.323627 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:47:54.379391 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:54.382809 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.382842 1844089 retry.go:31] will retry after 21.097681162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:54.709482 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:54.709561 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:54.709905 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:54.709960 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:55.209834 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.209915 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.210225 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:55.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:55.708984 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:55.709297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.209078 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.709184 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:56.709266 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:56.709603 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:56.738841 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:47:56.794457 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:47:56.797830 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:56.797870 1844089 retry.go:31] will retry after 32.033139138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:47:57.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.209553 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.209864 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:57.209918 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:47:57.709718 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:57.709790 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:57.710100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.209898 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.209970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.210337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:58.709037 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:58.709135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:58.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.209241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.209573 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:47:59.709578 1844089 type.go:168] "Request Body" body=""
	I1124 09:47:59.709657 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:47:59.710027 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:47:59.710084 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:00.211215 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.211305 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.211621 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:00.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:00.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:00.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.208998 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.209081 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:01.708891 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:01.708967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:01.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:02.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.209136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.209526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:02.209599 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:02.709293 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:02.709375 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:02.709754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.209529 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.209595 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.209866 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:03.709708 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:03.709780 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:03.710093 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:04.209893 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.209965 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.210332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:04.210385 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:04.709021 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:04.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:04.709445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.209464 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.209551 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.209872 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:05.709670 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:05.709745 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:05.710155 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.209763 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.209847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:06.708847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:06.708923 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:06.709285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:06.709340 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:07.208931 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.209383 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:07.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:07.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:07.709326 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:08.709122 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:08.709201 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:08.709539 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:08.709592 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:09.209218 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.209284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.209536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:09.709509 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:09.709587 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:09.709963 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.209602 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.209679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.209999 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:10.709702 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:10.709772 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:10.710032 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:10.710072 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:11.209870 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.209951 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.210285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:11.708984 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:11.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:11.709443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.208994 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:12.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:12.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:12.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:13.209062 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.209163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:13.209567 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:13.709210 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:13.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:13.709665 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.209027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:14.708929 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:14.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:14.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:15.209506 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.209583 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.209851 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:15.209900 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:15.481440 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:15.543475 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:15.543517 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.543536 1844089 retry.go:31] will retry after 17.984212056s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:48:15.709841 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:15.709917 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:15.710203 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.208972 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.209053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.209359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:16.708920 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:16.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:16.709254 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.209025 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.209445 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:17.709181 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:17.709254 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:17.709571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:17.709636 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:18.209204 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.209276 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:18.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:18.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.209167 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.209240 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:19.709543 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:19.709616 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:19.709867 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:19.709908 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:20.209743 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.209813 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.210142 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:20.708844 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:20.708918 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:20.709248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:21.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:21.709064 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:21.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:22.209022 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.209096 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.209401 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:22.209447 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:22.708989 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:22.709065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:22.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.209381 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:23.709077 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:23.709165 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:23.709527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:24.209256 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.209332 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.209659 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:24.209710 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:24.709523 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:24.709594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:24.709919 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.209714 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.209794 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.210176 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:25.709866 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:25.709934 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:25.710232 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.209437 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:26.709174 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:26.709252 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:26.709562 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:26.709621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:27.209207 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.209330 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.209681 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:27.709493 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:27.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:27.709901 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.209534 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.209607 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.209945 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:28.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:28.709691 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:28.709984 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:28.710042 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:28.831261 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:48:28.892751 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892791 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:28.892882 1844089 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:29.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:29.709415 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:29.709488 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:29.709832 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.209666 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.209735 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.209996 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:30.709837 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:30.709912 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:30.710250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:30.710310 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:31.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.209451 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:31.708995 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:31.709068 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:31.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.209127 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.209200 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.209540 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:32.709251 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:32.709359 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:32.709688 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:33.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:33.209573 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:33.528038 1844089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:48:33.587216 1844089 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587268 1844089 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:48:33.587355 1844089 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1124 09:48:33.590586 1844089 out.go:179] * Enabled addons: 
	I1124 09:48:33.594109 1844089 addons.go:530] duration metric: took 1m28.385890989s for enable addons: enabled=[]
	I1124 09:48:33.709504 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:33.709580 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:33.709909 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.209684 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.209763 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.210103 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:34.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:34.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:34.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:35.209792 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.209867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.210196 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:35.210254 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:35.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:35.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:35.709406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.208901 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.209290 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:36.708942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:36.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:36.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.209089 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.209182 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:37.708988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:37.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:37.709398 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:38.208956 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.209049 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.209393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:38.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:38.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:38.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.209063 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.209144 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.209398 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:39.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:39.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:39.709762 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:39.709826 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:40.209362 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.209445 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.209801 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:40.709616 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:40.709695 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:40.710016 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.209808 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.209911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.210242 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:41.708947 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:41.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:41.709450 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:42.209333 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.209441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.209737 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:42.209782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:42.709513 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:42.709593 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:42.709913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.209705 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.209787 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.210136 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:43.709811 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:43.709882 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:43.710135 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.208840 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.208916 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:44.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:44.709053 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:44.709434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:44.709491 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:45.209557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.210004 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:45.709853 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:45.709947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:45.710263 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.208973 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.209436 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:46.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:46.708971 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:46.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:47.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:47.209423 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:47.708928 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:47.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:47.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.209017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.209370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:48.709090 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:48.709181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:48.709512 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:49.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:49.209487 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:49.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:49.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:49.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.209043 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:50.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:50.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:50.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.208831 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.209321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:51.708959 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:51.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:51.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:51.709417 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:52.209136 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.209213 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.209591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:52.709205 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:52.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:52.709536 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.208961 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.209062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:53.709175 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:53.709255 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:53.709599 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:53.709661 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:54.209206 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.209288 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:54.709557 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:54.709679 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:54.709998 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.209740 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.210158 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:55.708864 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:55.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:55.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:56.208988 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.209080 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:56.209502 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:56.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:56.709284 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:56.709658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.209431 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.209503 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.209825 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:57.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:57.709393 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:57.709781 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:58.209591 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.209670 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.210036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:48:58.210095 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:48:58.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:58.709861 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:58.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.208919 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:48:59.709435 1844089 type.go:168] "Request Body" body=""
	I1124 09:48:59.709520 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:48:59.709836 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:00.209722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.210110 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:00.210156 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:00.709882 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:00.709966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:00.710301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.208997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:01.709044 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:01.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:01.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.209373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:02.708979 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:02.709069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:02.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:02.709406 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:03.208942 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.209309 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:03.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:03.709027 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:03.709334 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.208982 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.209059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:04.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:04.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:04.709678 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:04.709782 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:05.209548 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.209645 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.209977 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:05.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:05.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:05.710166 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.208981 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.209051 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.209332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:06.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:06.709004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:06.709332 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:07.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.209086 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.209494 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:07.209563 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:07.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:07.709139 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:07.709391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.209054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.209399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:08.709011 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:08.709085 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:08.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.209052 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.209488 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:09.709362 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:09.709442 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:09.709796 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:09.709855 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:10.209613 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.209690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:10.709735 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:10.709803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:10.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.209881 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.209958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.210304 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:11.708941 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:11.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:11.709359 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:12.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:12.209396 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:12.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:12.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:12.709325 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.209056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.209385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:13.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:13.709008 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:13.709380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:14.209165 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.209238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.209577 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:14.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:14.709397 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:14.709478 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:14.709814 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.209760 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.210102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:15.709873 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:15.709949 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:15.710282 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.209016 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:16.709074 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:16.709163 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:16.709419 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:16.709459 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:17.209141 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.209215 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.209563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:17.709286 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:17.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:17.709666 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.209424 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.209499 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.209754 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:18.709505 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:18.709585 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:18.709897 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:18.709953 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:19.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.209779 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.210117 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:19.709834 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:19.709909 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:19.710183 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.209023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:20.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:20.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:20.709426 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:21.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.209362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:21.209415 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:21.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:21.709029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:21.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.209126 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.209204 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.209575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:22.709212 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:22.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:22.709550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:23.209231 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.209319 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.209670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:23.209763 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:23.709555 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:23.709633 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:23.709995 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.209767 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.209841 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.210100 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:24.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:24.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:24.709526 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:25.209328 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.209411 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.209756 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:25.209816 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:25.709508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:25.709600 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:25.709938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.209774 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.209856 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.210202 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:26.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:26.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:26.709369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:27.209746 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.209815 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.210131 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:27.210184 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:27.708830 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:27.708905 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:27.709289 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.208880 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.208957 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.209307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:28.708922 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:28.709007 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:28.709327 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.209020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.209365 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:29.709345 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:29.709441 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:29.709777 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:29.709838 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:30.209612 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.209687 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.209958 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:30.709722 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:30.709798 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:30.710129 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.208884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.209299 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:31.708974 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:31.709250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:32.208916 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.208993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:32.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:32.708937 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:32.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:32.709368 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.208919 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.208994 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.209330 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:33.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:33.709056 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:33.709413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:34.209151 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.209227 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:34.209646 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:34.709436 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:34.709506 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:34.709774 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.209725 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.209803 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.210160 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:35.708884 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:35.708977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:35.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.208912 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.209323 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:36.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:36.709095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:36.709458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:36.709524 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:37.209047 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.209151 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:37.709220 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:37.709324 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:37.709631 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.209508 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.209592 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.209964 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:38.709785 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:38.709869 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:38.710199 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:38.710257 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:39.208814 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.208884 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.209168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:39.709057 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:39.709156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:39.709501 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.209097 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.209195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:40.709222 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:40.709295 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:40.709630 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:41.209317 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.209397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.209747 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:41.209802 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:41.709569 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:41.709654 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:41.709993 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.209817 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.209904 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.210200 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:42.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:42.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:42.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.209070 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.209548 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:43.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:43.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:43.709575 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:43.709620 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:44.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.209044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.209456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:44.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:44.709401 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:44.709783 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.209860 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.209959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.210271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:45.708945 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:45.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:45.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:46.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.209515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:46.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:46.709202 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:46.709268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:46.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.209030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.209384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:47.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:47.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:47.709402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.209161 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.209414 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:48.709091 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:48.709194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:48.709569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:48.709627 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:49.209307 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.209384 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.209719 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:49.709527 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:49.709599 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:49.709865 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.209620 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.209699 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.210039 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:50.709717 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:50.709799 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:50.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:50.710183 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:51.208825 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.208894 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.209172 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:51.708925 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:51.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:51.709349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.208955 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:52.708893 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:52.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:52.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:53.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.209349 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:53.209399 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:53.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:53.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:53.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.208920 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.209318 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:54.709373 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:54.709458 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:54.709760 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:55.209592 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.209668 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.209978 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:55.210040 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:55.709775 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:55.709849 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:55.710161 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.208943 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.209271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:56.708876 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:56.708959 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:56.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.208866 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.208977 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.209285 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:57.708997 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:57.709072 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:57.709427 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:57.709482 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:49:58.209166 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.209246 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.209658 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:58.709454 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:58.709524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:58.709780 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.209521 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.209598 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.209934 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:49:59.709770 1844089 type.go:168] "Request Body" body=""
	I1124 09:49:59.709854 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:49:59.710168 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:49:59.710230 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:00.208926 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.209004 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.210913 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:50:00.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:00.709842 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:00.710201 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.209315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:01.709014 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:01.709093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:01.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:02.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.209057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.209443 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:02.209542 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:02.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:02.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:02.709389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.209380 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:03.708939 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:03.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:03.709357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.208970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.209268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:04.709182 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:04.709269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:04.709623 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:04.709678 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:05.209442 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.209524 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.209862 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:05.709612 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:05.709690 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:05.710022 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.209806 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.209880 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.210219 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:06.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:06.709013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:06.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:07.209084 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.209187 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.209448 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:07.209497 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:07.709139 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:07.709341 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:07.710017 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.209829 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.209903 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:08.708897 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:08.708964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:08.709236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.208927 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.209002 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.209378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:09.708935 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:09.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:09.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:09.709424 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:10.208903 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.208975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.209331 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:10.708967 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:10.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:10.709423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.209031 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.209138 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.209530 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:11.709132 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:11.709202 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:11.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:11.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:12.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.209422 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:12.709068 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:12.709177 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:12.709636 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.209220 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.209299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.209571 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:13.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:13.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:13.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:14.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.209025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:14.209433 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:14.708909 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:14.708988 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:14.709306 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.209748 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.209826 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.210152 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:15.708902 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:15.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:15.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.208905 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.208978 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.209278 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:16.708874 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:16.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:16.709267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:16.709311 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:17.208877 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.209356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:17.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:17.708976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:17.709308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.209413 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:18.709157 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:18.709238 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:18.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:18.709645 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:19.209201 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.209269 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.209518 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:19.709485 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:19.709558 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:19.709880 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.209636 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.209974 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:20.709755 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:20.709829 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:20.710090 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:20.710130 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:21.209835 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.209913 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.210224 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:21.708910 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:21.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:21.709338 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.208900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.208981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:22.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:22.709058 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:22.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:23.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.209616 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:23.209677 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:23.709211 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:23.709280 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:23.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.209032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.209394 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:24.708953 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:24.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:24.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.209203 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.209275 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.209580 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:25.709316 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:25.709392 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:25.709705 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:25.709765 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:26.209510 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.209594 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.209928 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:26.709733 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:26.709802 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:26.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.209837 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.209926 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.210235 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:27.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:27.709350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:28.208906 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.208976 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.209251 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:28.209296 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:28.709016 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:28.709092 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:28.709432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.208954 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.209371 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:29.709348 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:29.709421 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:29.709708 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:30.209514 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.209603 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.209930 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:30.209989 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:30.709705 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:30.709782 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:30.710096 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.209823 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.209893 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.210153 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:31.708900 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:31.708982 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:31.709337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.209065 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.209484 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:32.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:32.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:32.709515 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:32.709566 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:33.209221 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.209294 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.209638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:33.709229 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:33.709309 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:33.709638 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.209279 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.209527 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:34.709451 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:34.709526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:34.709824 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:34.709870 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:35.209712 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.209801 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.210156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:35.709774 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:35.709847 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:35.710101 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.208847 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.208924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.209266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:36.708961 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:36.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:36.709411 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:37.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.208992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.209261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:37.209303 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:37.708946 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:37.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:37.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.208945 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.209345 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:38.709003 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:38.709091 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:38.709404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:39.209187 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.209262 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.209613 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:39.209672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:39.709433 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:39.709508 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:39.709838 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.209598 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.209675 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.210009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:40.709773 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:40.709855 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:40.710189 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.208908 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:41.708924 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:41.708992 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:41.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:41.709318 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:42.209001 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.209093 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.209487 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:42.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:42.709286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:42.709587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.209235 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.209303 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:43.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:43.709313 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:43.709652 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:43.709709 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:44.209469 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.209542 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.209879 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:44.709684 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:44.709755 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:44.710023 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.208845 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.208942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:45.709723 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:45.709804 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:45.710156 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:45.710211 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:46.208872 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.208948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.209249 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:46.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:46.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:46.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.208964 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:47.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:47.709147 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:47.709424 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:48.209095 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.209192 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.209519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:48.209580 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:48.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:48.709017 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:48.709378 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.209077 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.209428 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:49.709414 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:49.709491 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:49.709816 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:50.209655 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.209742 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.210066 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:50.210123 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:50.709861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:50.709937 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:50.710188 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.208878 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.208952 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.209322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:51.708914 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:51.708993 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:51.709324 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.208904 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.208985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.209267 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:52.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:52.709023 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:52.709362 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:52.709420 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:53.208960 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.209038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.209404 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:53.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:53.708997 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:53.709294 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:54.708970 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:54.709054 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:54.709449 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:54.709516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:55.209555 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.209634 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.209938 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:55.709754 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:55.709830 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:55.710148 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.208939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:56.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:56.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:56.709364 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:57.208951 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.209386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:57.209445 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:50:57.708966 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:57.709042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:57.709399 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.209082 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.209168 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.209479 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:58.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:58.709032 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:58.709393 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.208975 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.209052 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:50:59.708894 1844089 type.go:168] "Request Body" body=""
	I1124 09:50:59.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:50:59.709244 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:50:59.709289 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:00.209000 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.209097 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:00.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:00.709025 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:00.709388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.209068 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.209162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.209486 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:01.708917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:01.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:01.709341 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:01.709394 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:02.208943 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.209065 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:02.708872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:02.708947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:02.709229 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:03.208953 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.209028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.210127 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1124 09:51:03.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:03.708939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:03.709302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:04.209006 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.209073 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.209406 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:04.209458 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:04.709392 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:04.709474 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:04.709835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.209403 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.209479 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.209835 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:05.709680 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:05.709766 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:05.710028 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:06.209869 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.209955 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.210295 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:06.210355 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:06.708964 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:06.709046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:06.709408 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.208970 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.209420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:07.708938 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:07.709018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:07.709379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.209150 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.209225 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:08.709219 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:08.709289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:08.709627 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:08.709719 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:09.209522 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.209981 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:09.709768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:09.709843 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:09.710123 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.208867 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.208987 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:10.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:10.709020 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:10.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:11.208990 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.209070 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.209397 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:11.209465 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:11.708886 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:11.708972 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:11.709239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.208935 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.209374 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:12.708976 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:12.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:12.709385 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.208896 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.209256 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:13.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:13.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:13.709344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:13.709391 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:14.208984 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.209391 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:14.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:14.709366 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:14.709615 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.209618 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.209698 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.210033 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:15.709832 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:15.709911 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:15.710236 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:15.710293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:16.208918 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.209286 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:16.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:16.709009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:16.709328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.208937 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.209046 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.209357 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:17.708856 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:17.708924 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:17.709185 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:18.208887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.208963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.209319 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:18.209373 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:18.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:18.709038 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:18.709366 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.208917 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.209000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.209344 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:19.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:19.709241 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:19.709591 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:20.209330 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.209415 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:20.209872 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:20.709638 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:20.709728 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:20.710059 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.209872 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.209964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.210347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:21.709055 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:21.709162 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:21.709523 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.209023 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.209095 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.209382 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:22.708977 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:22.709055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:22.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:22.709477 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:23.209195 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.209283 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.209584 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:23.709228 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:23.709299 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:23.709557 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.208963 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:24.709350 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:24.709431 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:24.709744 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:24.709799 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:25.209704 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.209784 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.210041 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:25.709815 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:25.709891 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:25.710192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.209912 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.209990 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.210312 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:26.708887 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:26.708968 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:26.709274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:27.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:27.209444 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:27.708932 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:27.709011 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:27.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.209045 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.209134 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:28.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:28.709057 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:28.709435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:29.209164 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.209578 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:29.209633 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:29.709432 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:29.709510 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:29.709795 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.209546 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.209624 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.209973 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:30.709624 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:30.709702 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:30.710036 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:31.209789 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.209866 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.210145 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:31.210192 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:31.708857 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:31.708932 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:31.709271 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.208962 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.209409 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:32.708881 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:32.708953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:32.709262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.208947 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.209400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:33.708933 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:33.709012 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:33.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:33.709407 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:34.209051 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.209156 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.209423 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:34.709486 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:34.709578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:34.709969 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.209768 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.209850 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.210220 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:35.708944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:35.709035 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:35.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:36.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.209039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.209372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:36.209421 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:36.709121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:36.709197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:36.709519 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.208947 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.209239 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:37.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:37.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:37.709416 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:38.209176 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.209257 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.209590 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:38.209647 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:38.709163 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:38.709230 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:38.709478 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.208929 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.209009 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.209347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:39.709324 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:39.709397 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:39.709728 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:40.209489 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.209557 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.209830 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:40.209876 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:40.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:40.709707 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:40.710055 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.209685 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.209762 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.210061 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:41.709752 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:41.709828 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:41.710112 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.209047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.209560 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:42.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:42.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:42.709372 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:42.709426 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:43.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.208973 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.209250 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:43.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:43.709026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:43.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.209092 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.209194 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.209587 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:44.709472 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:44.709546 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:44.709820 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:44.709861 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:45.209849 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.209939 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.210268 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:45.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:45.709006 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:45.709307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.208967 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.209264 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:46.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:46.709059 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:46.709403 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:47.209142 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.209219 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.209569 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:47.209622 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:47.709238 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:47.709312 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:47.709563 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.208991 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.209067 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.209412 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:48.709134 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:48.709207 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:48.709500 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.209300 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:49.708955 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:49.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:49.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:49.709409 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:50.209121 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.209206 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.209533 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:50.708888 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:50.708963 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:50.709261 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.209021 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.209129 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.209441 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:51.709189 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:51.709265 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:51.709596 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:51.709649 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:52.209208 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.209290 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.209551 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:52.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:52.709063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:52.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.209080 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.209550 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:53.708907 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:53.708975 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:53.709317 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:54.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.209337 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:54.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:54.708980 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:54.709060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:54.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.209380 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.209452 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.209779 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:55.708975 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:55.709062 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:55.709456 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:56.208966 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.209063 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:56.209437 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:56.709790 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:56.709867 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:56.710121 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.209908 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.209985 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.210307 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:57.708957 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:57.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:57.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:58.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.209185 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:51:58.209485 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:51:58.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:58.709014 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:58.709347 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.209018 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.209363 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:51:59.708916 1844089 type.go:168] "Request Body" body=""
	I1124 09:51:59.708991 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:51:59.709322 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.209018 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.209122 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.209440 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:00.709290 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:00.709370 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:00.709700 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:00.709758 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:01.209455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.209526 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.209787 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:01.709649 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:01.709729 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:01.710058 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.209841 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.209925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.210265 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:02.708883 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:02.708954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:02.709293 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:03.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.209407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:03.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:03.709146 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:03.709228 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:03.709570 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.209212 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.209286 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.209544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:04.709581 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:04.709667 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:04.710009 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.208861 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.208954 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.209320 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:05.708991 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:05.709066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:05.709407 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:05.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:06.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.209040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.209402 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:06.709136 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:06.709218 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:06.709559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.209217 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.209291 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.209612 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:07.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:07.709037 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:07.709400 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:08.209119 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.209197 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:08.209621 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:08.709207 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:08.709292 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:08.709544 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:09.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:09.709036 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:09.709375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.209090 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.209178 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:10.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:10.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:10.709390 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:10.709457 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:11.209191 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.209268 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.209610 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:11.709214 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:11.709285 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:11.709609 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.208967 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.209392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:12.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:12.709039 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:12.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:13.208958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.209029 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.209308 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:13.209350 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:13.709051 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:13.709140 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:13.709483 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.209579 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:14.709549 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:14.709685 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:14.710128 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:15.209213 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.209289 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.209620 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:15.209679 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:15.709455 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:15.709531 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:15.709878 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.209651 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.209725 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.209983 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:16.709769 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:16.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:16.710195 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.208949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.209033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.209379 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:17.708911 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:17.708998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:17.709361 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:17.709412 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:18.208968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.209045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.209388 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:18.708952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:18.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:18.709373 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.208939 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.209010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.209302 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:19.709272 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:19.709356 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:19.709668 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:19.709724 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:20.209502 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.209578 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.209951 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:20.709782 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:20.709853 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:20.710102 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.209876 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.209953 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.210310 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:21.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:21.708981 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:21.709321 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:22.208895 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.208966 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.209252 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:22.209293 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:22.709034 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:22.709136 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:22.709493 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.208946 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.209022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.209350 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:23.708905 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:23.708983 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:23.709272 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:24.208934 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.209013 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:24.209428 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:24.708949 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:24.709030 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:24.709396 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.209216 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.209293 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.209546 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:25.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:25.709028 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:25.709353 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:26.209056 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.209172 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.209458 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:26.209508 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:26.708880 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:26.708948 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:26.709291 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.208965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.209042 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:27.709023 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:27.709120 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:27.709438 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.209060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.209160 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.209432 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:28.708981 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:28.709061 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:28.709386 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:28.709443 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:29.209162 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.209244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.209559 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:29.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:29.709568 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:29.709818 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.209668 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.209750 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.210098 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:30.708867 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:30.708942 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:30.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:31.208898 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.208986 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.209328 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:31.209386 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:31.708968 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:31.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:31.709377 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.208950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.209024 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.209395 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:32.709072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:32.709157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:32.709415 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:33.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:33.209513 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:33.709035 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:33.709137 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:33.709462 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.208893 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.208964 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.209274 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:34.709168 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:34.709244 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:34.709586 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:35.209409 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.209492 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.209807 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:35.209852 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:35.709526 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:35.709597 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:35.709869 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.209633 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.209708 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.210043 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:36.709850 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:36.709925 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:36.710262 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.209021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.209297 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:37.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:37.709022 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:37.709384 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:37.709440 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:38.208987 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.209069 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.209435 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:38.708972 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:38.709041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:38.709315 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.208978 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.209055 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:39.709295 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:39.709373 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:39.709697 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:39.709756 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:40.209475 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.209550 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.209908 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:40.709677 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:40.709752 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:40.710115 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.209759 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.209835 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.210192 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:41.708889 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:41.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:41.709284 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:42.208977 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.209060 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.209455 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:42.209516 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:42.709031 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:42.709125 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:42.709477 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.208925 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.208998 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.209288 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:43.708960 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:43.709040 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:43.709342 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.209073 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.209164 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.209444 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:44.709305 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:44.709379 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:44.709632 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:44.709672 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:45.209865 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.210034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.211000 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:45.708958 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:45.709034 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:45.709376 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.209072 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.209157 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.209473 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:46.708965 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:46.709047 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:46.709360 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:47.208989 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.209066 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.209434 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:47.209489 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:47.709149 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:47.709220 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:47.709470 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.208944 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.209367 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:48.708950 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:48.709033 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:48.709392 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.208882 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.208956 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.209248 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:49.708923 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:49.708996 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:49.709346 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:49.709401 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:50.208932 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.209015 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.209369 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:50.709053 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:50.709142 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:50.709429 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.209160 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.209242 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.209581 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:51.709273 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:51.709351 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:51.709670 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:51.709725 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:52.209462 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.209549 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.209889 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:52.709740 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:52.709823 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:52.710180 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.208924 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.209005 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.209352 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:53.709060 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:53.709149 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:53.709405 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:54.208959 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.209031 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.209410 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:54.209462 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:54.708969 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:54.709044 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:54.709387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.209314 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.209382 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.209635 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:55.708948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:55.709021 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:55.709370 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:56.209074 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.209181 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.209509 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:56.209569 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:56.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:56.708958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:56.709266 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.209389 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:57.709098 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:57.709195 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:57.709513 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.208874 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.208958 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.209260 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:58.708954 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:58.709045 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:58.709420 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:52:58.709478 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:52:59.208948 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.209041 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.209387 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:52:59.708890 1844089 type.go:168] "Request Body" body=""
	I1124 09:52:59.708969 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:52:59.709259 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.209170 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.209819 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:00.709617 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:00.709688 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:00.710034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:00.710088 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:01.209699 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.209777 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.210034 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:01.709784 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:01.709858 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:01.710223 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.209882 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.209960 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.210301 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:02.708903 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:02.708970 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:02.709275 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:03.208952 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.209026 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.209375 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:03.209442 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:03.708930 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:03.709010 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:03.709356 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.209039 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.209135 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.209531 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:04.709494 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:04.709573 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:04.709992 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:05.209202 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.209287 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.209886 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1124 09:53:05.209937 1844089 node_ready.go:55] error getting node "functional-373432" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-373432": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 09:53:05.708919 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:05.709000 1844089 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-373432" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1124 09:53:05.709355 1844089 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1124 09:53:06.209058 1844089 type.go:168] "Request Body" body=""
	I1124 09:53:06.209140 1844089 node_ready.go:38] duration metric: took 6m0.000414768s for node "functional-373432" to be "Ready" ...
	I1124 09:53:06.212349 1844089 out.go:203] 
	W1124 09:53:06.215554 1844089 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1124 09:53:06.215587 1844089 out.go:285] * 
	W1124 09:53:06.217723 1844089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 09:53:06.220637 1844089 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 09:53:14 functional-373432 crio[6244]: time="2025-11-24T09:53:14.817550161Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7235f97c-291f-4621-a834-368f0380908f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:14 functional-373432 crio[6244]: time="2025-11-24T09:53:14.844023057Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=07b8cd5a-b881-4bbe-a717-727d93ea6d16 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:14 functional-373432 crio[6244]: time="2025-11-24T09:53:14.84418017Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=07b8cd5a-b881-4bbe-a717-727d93ea6d16 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:14 functional-373432 crio[6244]: time="2025-11-24T09:53:14.844235974Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=07b8cd5a-b881-4bbe-a717-727d93ea6d16 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.9320376Z" level=info msg="Checking image status: minikube-local-cache-test:functional-373432" id=a4ed308e-d514-45fd-a3fc-8a74361d8993 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.961735471Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-373432" id=045576c9-388a-412c-91fa-df1dc91ddbf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.961910226Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-373432 not found" id=045576c9-388a-412c-91fa-df1dc91ddbf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.961964167Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-373432 found" id=045576c9-388a-412c-91fa-df1dc91ddbf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.987746256Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-373432" id=9d1a18c1-6b22-4a5b-8a9e-2ba1a06e3d01 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.987899177Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-373432 not found" id=9d1a18c1-6b22-4a5b-8a9e-2ba1a06e3d01 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:15 functional-373432 crio[6244]: time="2025-11-24T09:53:15.987949606Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-373432 found" id=9d1a18c1-6b22-4a5b-8a9e-2ba1a06e3d01 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:16 functional-373432 crio[6244]: time="2025-11-24T09:53:16.791685967Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d99223e9-26ed-44d8-ad07-c98ffe6b880a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.129751526Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=96077913-9966-483f-9e4d-73605b805e23 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.129892295Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=96077913-9966-483f-9e4d-73605b805e23 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.129931442Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=96077913-9966-483f-9e4d-73605b805e23 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.804659087Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=15984ffe-bca9-48a6-a98a-61eb97b3a11a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.804783511Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=15984ffe-bca9-48a6-a98a-61eb97b3a11a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.804819852Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=15984ffe-bca9-48a6-a98a-61eb97b3a11a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.836659051Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=674e2ead-5287-4d08-8e7b-3cedc6986504 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.836782417Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=674e2ead-5287-4d08-8e7b-3cedc6986504 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.836818766Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=674e2ead-5287-4d08-8e7b-3cedc6986504 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.862705529Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f9b45855-6480-4613-bf87-7678688fe267 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.862838092Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f9b45855-6480-4613-bf87-7678688fe267 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:17 functional-373432 crio[6244]: time="2025-11-24T09:53:17.862874335Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f9b45855-6480-4613-bf87-7678688fe267 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:53:18 functional-373432 crio[6244]: time="2025-11-24T09:53:18.393780554Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=39982433-4391-44e8-bf72-9d67ba5887f9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:53:22.366440   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:22.366955   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:22.368627   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:22.369077   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:53:22.370778   10337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 09:53:22 up  8:35,  0 user,  load average: 0.52, 0.28, 0.56
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 09:53:20 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:20 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1159.
	Nov 24 09:53:20 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:20 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:20 functional-373432 kubelet[10213]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:20 functional-373432 kubelet[10213]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:20 functional-373432 kubelet[10213]: E1124 09:53:20.776544   10213 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:20 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:20 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:21 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1160.
	Nov 24 09:53:21 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:21 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:21 functional-373432 kubelet[10248]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:21 functional-373432 kubelet[10248]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:21 functional-373432 kubelet[10248]: E1124 09:53:21.536320   10248 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:21 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:21 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 09:53:22 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1161.
	Nov 24 09:53:22 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:22 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 09:53:22 functional-373432 kubelet[10315]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:22 functional-373432 kubelet[10315]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 09:53:22 functional-373432 kubelet[10315]: E1124 09:53:22.272293   10315 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 09:53:22 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 09:53:22 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (380.253583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (737.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373432 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 09:53:39.922588 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:55:36.849264 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:57:54.299906 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:59:17.366698 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:00:36.851377 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:02:54.305312 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:05:36.850275 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-373432 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.882092023s)

-- stdout --
	* [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000242606s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-373432 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.883321487s for "functional-373432" cluster.
I1124 10:05:38.234173 1806704 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (306.830748ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 logs -n 25: (1.000405986s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-498341 image ls                                                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format json --alsologtostderr                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format table --alsologtostderr                                                                                       │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ delete         │ -p functional-498341                                                                                                                              │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │ 24 Nov 25 09:38 UTC │
	│ start          │ -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │                     │
	│ start          │ -p functional-373432 --alsologtostderr -v=8                                                                                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │                     │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:latest                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add minikube-local-cache-test:functional-373432                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache delete minikube-local-cache-test:functional-373432                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl images                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ cache          │ functional-373432 cache reload                                                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ kubectl        │ functional-373432 kubectl -- --context functional-373432 get pods                                                                                 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ start          │ -p functional-373432 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:53:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:53:23.394373 1849924 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:53:23.394473 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394476 1849924 out.go:374] Setting ErrFile to fd 2...
	I1124 09:53:23.394480 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394868 1849924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:53:23.395314 1849924 out.go:368] Setting JSON to false
	I1124 09:53:23.396438 1849924 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30954,"bootTime":1763947050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:53:23.396523 1849924 start.go:143] virtualization:  
	I1124 09:53:23.399850 1849924 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:53:23.403618 1849924 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:53:23.403698 1849924 notify.go:221] Checking for updates...
	I1124 09:53:23.409546 1849924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:53:23.412497 1849924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:53:23.415264 1849924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:53:23.418109 1849924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:53:23.420908 1849924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:53:23.424158 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:23.424263 1849924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:53:23.449398 1849924 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:53:23.449524 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.505939 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.496540271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.506033 1849924 docker.go:319] overlay module found
	I1124 09:53:23.509224 1849924 out.go:179] * Using the docker driver based on existing profile
	I1124 09:53:23.512245 1849924 start.go:309] selected driver: docker
	I1124 09:53:23.512255 1849924 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.512340 1849924 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:53:23.512454 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.568317 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.558792888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.568738 1849924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:53:23.568763 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:23.568821 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:23.568862 1849924 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.571988 1849924 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:53:23.574929 1849924 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:53:23.577959 1849924 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:53:23.580671 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:23.580735 1849924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:53:23.600479 1849924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:53:23.600490 1849924 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:53:23.634350 1849924 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:53:24.054820 1849924 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:53:24.054990 1849924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:53:24.055122 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.055240 1849924 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:53:24.055269 1849924 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.055313 1849924 start.go:364] duration metric: took 27.192µs to acquireMachinesLock for "functional-373432"
	I1124 09:53:24.055327 1849924 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:53:24.055331 1849924 fix.go:54] fixHost starting: 
	I1124 09:53:24.055580 1849924 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:53:24.072844 1849924 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:53:24.072865 1849924 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:53:24.076050 1849924 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:53:24.076079 1849924 machine.go:94] provisionDockerMachine start ...
	I1124 09:53:24.076162 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.100870 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.101221 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.101228 1849924 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:53:24.232623 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.252893 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.252907 1849924 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:53:24.252988 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.280057 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.280362 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.280376 1849924 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:53:24.402975 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.467980 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.468079 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.499770 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.500067 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.500084 1849924 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:53:24.556663 1849924 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556759 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:53:24.556767 1849924 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.133µs
	I1124 09:53:24.556774 1849924 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:53:24.556785 1849924 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556814 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:53:24.556818 1849924 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.266µs
	I1124 09:53:24.556823 1849924 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556832 1849924 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556867 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:53:24.556871 1849924 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 39.738µs
	I1124 09:53:24.556876 1849924 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556884 1849924 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556911 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:53:24.556915 1849924 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.655µs
	I1124 09:53:24.556920 1849924 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556934 1849924 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556959 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:53:24.556963 1849924 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 35.948µs
	I1124 09:53:24.556967 1849924 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556975 1849924 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556999 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:53:24.557011 1849924 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.226µs
	I1124 09:53:24.557015 1849924 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:53:24.557023 1849924 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557048 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:53:24.557051 1849924 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 29.202µs
	I1124 09:53:24.557056 1849924 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:53:24.557065 1849924 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557089 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:53:24.557093 1849924 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.258µs
	I1124 09:53:24.557097 1849924 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:53:24.557129 1849924 cache.go:87] Successfully saved all images to host disk.
	I1124 09:53:24.653937 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:53:24.653952 1849924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:53:24.653984 1849924 ubuntu.go:190] setting up certificates
	I1124 09:53:24.653993 1849924 provision.go:84] configureAuth start
	I1124 09:53:24.654058 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:24.671316 1849924 provision.go:143] copyHostCerts
	I1124 09:53:24.671391 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:53:24.671399 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:53:24.671473 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:53:24.671573 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:53:24.671577 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:53:24.671611 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:53:24.671659 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:53:24.671662 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:53:24.671684 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:53:24.671727 1849924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:53:25.074688 1849924 provision.go:177] copyRemoteCerts
	I1124 09:53:25.074752 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:53:25.074789 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.095886 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.200905 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:53:25.221330 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:53:25.243399 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:53:25.263746 1849924 provision.go:87] duration metric: took 609.720286ms to configureAuth
	I1124 09:53:25.263762 1849924 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:53:25.263945 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:25.264045 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.283450 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:25.283754 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:25.283770 1849924 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:53:25.632249 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:53:25.632261 1849924 machine.go:97] duration metric: took 1.556176004s to provisionDockerMachine
	I1124 09:53:25.632272 1849924 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:53:25.632283 1849924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:53:25.632368 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:53:25.632405 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.650974 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.756910 1849924 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:53:25.760285 1849924 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:53:25.760302 1849924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:53:25.760312 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:53:25.760370 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:53:25.760445 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:53:25.760518 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:53:25.760561 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:53:25.767953 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:25.785397 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:53:25.802531 1849924 start.go:296] duration metric: took 170.24573ms for postStartSetup
	I1124 09:53:25.802613 1849924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:53:25.802665 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.819451 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.922232 1849924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:53:25.926996 1849924 fix.go:56] duration metric: took 1.871657348s for fixHost
	I1124 09:53:25.927011 1849924 start.go:83] releasing machines lock for "functional-373432", held for 1.871691088s
	I1124 09:53:25.927085 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:25.943658 1849924 ssh_runner.go:195] Run: cat /version.json
	I1124 09:53:25.943696 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.943958 1849924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:53:25.944002 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.980808 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.985182 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:26.175736 1849924 ssh_runner.go:195] Run: systemctl --version
	I1124 09:53:26.181965 1849924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:53:26.217601 1849924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:53:26.221860 1849924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:53:26.221923 1849924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:53:26.229857 1849924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:53:26.229870 1849924 start.go:496] detecting cgroup driver to use...
	I1124 09:53:26.229899 1849924 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:53:26.229945 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:53:26.244830 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:53:26.257783 1849924 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:53:26.257835 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:53:26.273202 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:53:26.286089 1849924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:53:26.392939 1849924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:53:26.505658 1849924 docker.go:234] disabling docker service ...
	I1124 09:53:26.505717 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:53:26.520682 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:53:26.533901 1849924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:53:26.643565 1849924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:53:26.781643 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:53:26.794102 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:53:26.807594 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:26.964951 1849924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:53:26.965014 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.974189 1849924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:53:26.974248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.982757 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.991310 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.000248 1849924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:53:27.009837 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.019258 1849924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.028248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.037276 1849924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:53:27.045218 1849924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:53:27.052631 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:27.162722 1849924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:53:27.344834 1849924 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:53:27.344893 1849924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:53:27.348791 1849924 start.go:564] Will wait 60s for crictl version
	I1124 09:53:27.348847 1849924 ssh_runner.go:195] Run: which crictl
	I1124 09:53:27.352314 1849924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:53:27.376797 1849924 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:53:27.376884 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.404280 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.437171 1849924 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:53:27.439969 1849924 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:53:27.457621 1849924 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:53:27.466585 1849924 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1124 09:53:27.469312 1849924 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:53:27.469546 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.636904 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.787069 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.940573 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:27.940635 1849924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:53:27.974420 1849924 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:53:27.974431 1849924 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:53:27.974436 1849924 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:53:27.974527 1849924 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:53:27.974612 1849924 ssh_runner.go:195] Run: crio config
	I1124 09:53:28.037679 1849924 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1124 09:53:28.037700 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:28.037709 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:28.037724 1849924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:53:28.037750 1849924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:53:28.037877 1849924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:53:28.037948 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:53:28.045873 1849924 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:53:28.045941 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:53:28.053444 1849924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:53:28.066325 1849924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:53:28.079790 1849924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1124 09:53:28.092701 1849924 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:53:28.096834 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:28.213078 1849924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:53:28.235943 1849924 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:53:28.235953 1849924 certs.go:195] generating shared ca certs ...
	I1124 09:53:28.235988 1849924 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:53:28.236165 1849924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:53:28.236216 1849924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:53:28.236222 1849924 certs.go:257] generating profile certs ...
	I1124 09:53:28.236320 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:53:28.236381 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:53:28.236430 1849924 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:53:28.236545 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:53:28.236581 1849924 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:53:28.236590 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:53:28.236617 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:53:28.236639 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:53:28.236676 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:53:28.236733 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:28.237452 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:53:28.267491 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:53:28.288261 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:53:28.304655 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:53:28.321607 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:53:28.339914 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:53:28.357697 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:53:28.374827 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:53:28.392170 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:53:28.410757 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:53:28.428776 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:53:28.446790 1849924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:53:28.459992 1849924 ssh_runner.go:195] Run: openssl version
	I1124 09:53:28.466084 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:53:28.474433 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478225 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478282 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.521415 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 09:53:28.529784 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:53:28.538178 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542108 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542164 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.583128 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:53:28.591113 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:53:28.599457 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603413 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603474 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.645543 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:53:28.653724 1849924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:53:28.657603 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:53:28.698734 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:53:28.739586 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:53:28.780289 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:53:28.820840 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:53:28.861343 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:53:28.902087 1849924 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:28.902167 1849924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:53:28.902236 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.929454 1849924 cri.go:89] found id: ""
	I1124 09:53:28.929519 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:53:28.937203 1849924 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:53:28.937213 1849924 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:53:28.937261 1849924 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:53:28.944668 1849924 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:28.945209 1849924 kubeconfig.go:125] found "functional-373432" server: "https://192.168.49.2:8441"
	I1124 09:53:28.946554 1849924 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:53:28.956044 1849924 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 09:38:48.454819060 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 09:53:28.085978644 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1124 09:53:28.956053 1849924 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:53:28.956064 1849924 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:53:28.956128 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.991786 1849924 cri.go:89] found id: ""
	I1124 09:53:28.991878 1849924 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:53:29.009992 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:53:29.018335 1849924 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 24 09:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 24 09:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Nov 24 09:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 24 09:42 /etc/kubernetes/scheduler.conf
	
	I1124 09:53:29.018393 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:53:29.026350 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:53:29.034215 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.034271 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:53:29.042061 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.049959 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.050015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.057477 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:53:29.065397 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.065453 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:53:29.072838 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:53:29.080812 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:29.126682 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:30.915283 1849924 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.788534288s)
	I1124 09:53:30.915375 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.124806 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.187302 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.234732 1849924 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:53:31.234802 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:31.735292 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.235922 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.735385 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.235894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.734984 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.235509 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.735644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.235724 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.235151 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.734994 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.235505 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.734925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.235891 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.235854 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.235929 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.734921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.234991 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.235015 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.734874 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.235403 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.734996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.235058 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.735496 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.235113 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.735894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.234930 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.735636 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.234914 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.734875 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.235656 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.735578 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.235469 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.735823 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.235926 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.235524 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.735679 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.235407 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.735614 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.235868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.734868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.235806 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.735801 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.235315 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.735919 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.735842 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.235491 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.235122 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.735029 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.235002 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.735695 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.236092 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.735024 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.235917 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.735341 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.235291 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.735026 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.235183 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.735898 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.235334 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.234896 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.735246 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.235531 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.235579 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.735599 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.234953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.734946 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.235705 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.735908 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.234909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.735831 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.235563 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.735909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.234992 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.735855 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.234936 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.734993 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.235585 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.235013 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.735371 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.235016 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.735593 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.735653 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.235793 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.734939 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.235317 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.735001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.235075 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.234969 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.735715 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.234859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.735010 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.235545 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.735305 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.235127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.734989 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.235601 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.734933 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.234986 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.735250 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.235727 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.734976 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.235644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.735675 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.735127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:31.234921 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:31.235007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:31.266239 1849924 cri.go:89] found id: ""
	I1124 09:54:31.266252 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.266259 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:31.266265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:31.266323 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:31.294586 1849924 cri.go:89] found id: ""
	I1124 09:54:31.294608 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.294616 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:31.294623 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:31.294694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:31.322061 1849924 cri.go:89] found id: ""
	I1124 09:54:31.322076 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.322083 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:31.322088 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:31.322159 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:31.349139 1849924 cri.go:89] found id: ""
	I1124 09:54:31.349154 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.349161 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:31.349167 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:31.349230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:31.379824 1849924 cri.go:89] found id: ""
	I1124 09:54:31.379838 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.379845 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:31.379850 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:31.379915 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:31.407206 1849924 cri.go:89] found id: ""
	I1124 09:54:31.407220 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.407228 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:31.407233 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:31.407296 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:31.435102 1849924 cri.go:89] found id: ""
	I1124 09:54:31.435117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.435123 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:31.435132 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:31.435143 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:31.504759 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:31.504779 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:31.520567 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:31.520584 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:31.587634 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:31.587666 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:31.587680 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:31.665843 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:31.665864 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.199426 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:34.210826 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:34.210886 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:34.249730 1849924 cri.go:89] found id: ""
	I1124 09:54:34.249743 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.249769 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:34.249774 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:34.249844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:34.279157 1849924 cri.go:89] found id: ""
	I1124 09:54:34.279171 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.279178 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:34.279183 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:34.279253 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:34.305617 1849924 cri.go:89] found id: ""
	I1124 09:54:34.305631 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.305655 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:34.305661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:34.305730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:34.331221 1849924 cri.go:89] found id: ""
	I1124 09:54:34.331235 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.331243 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:34.331249 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:34.331309 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:34.357361 1849924 cri.go:89] found id: ""
	I1124 09:54:34.357374 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.357381 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:34.357387 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:34.357447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:34.382790 1849924 cri.go:89] found id: ""
	I1124 09:54:34.382805 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.382812 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:34.382817 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:34.382882 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:34.408622 1849924 cri.go:89] found id: ""
	I1124 09:54:34.408635 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.408653 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:34.408661 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:34.408673 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:34.473355 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:34.473365 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:34.473376 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:34.560903 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:34.560924 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.589722 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:34.589738 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:34.659382 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:34.659407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.175501 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:37.187020 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:37.187082 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:37.215497 1849924 cri.go:89] found id: ""
	I1124 09:54:37.215511 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.215518 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:37.215524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:37.215584 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:37.252296 1849924 cri.go:89] found id: ""
	I1124 09:54:37.252310 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.252317 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:37.252323 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:37.252383 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:37.281216 1849924 cri.go:89] found id: ""
	I1124 09:54:37.281230 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.281237 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:37.281242 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:37.281302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:37.307335 1849924 cri.go:89] found id: ""
	I1124 09:54:37.307349 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.307356 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:37.307361 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:37.307435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:37.333186 1849924 cri.go:89] found id: ""
	I1124 09:54:37.333209 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.333217 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:37.333222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:37.333290 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:37.358046 1849924 cri.go:89] found id: ""
	I1124 09:54:37.358060 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.358068 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:37.358074 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:37.358130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:37.388252 1849924 cri.go:89] found id: ""
	I1124 09:54:37.388265 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.388273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:37.388280 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:37.388291 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:37.423715 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:37.423740 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:37.490800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:37.490819 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.506370 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:37.506387 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:37.571587 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:37.571597 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:37.571608 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.152603 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:40.164138 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:40.164210 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:40.192566 1849924 cri.go:89] found id: ""
	I1124 09:54:40.192581 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.192589 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:40.192594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:40.192677 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:40.233587 1849924 cri.go:89] found id: ""
	I1124 09:54:40.233616 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.233623 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:40.233628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:40.233702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:40.268152 1849924 cri.go:89] found id: ""
	I1124 09:54:40.268166 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.268173 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:40.268178 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:40.268258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:40.297572 1849924 cri.go:89] found id: ""
	I1124 09:54:40.297586 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.297593 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:40.297605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:40.297666 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:40.328480 1849924 cri.go:89] found id: ""
	I1124 09:54:40.328502 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.328511 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:40.328517 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:40.328583 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:40.354088 1849924 cri.go:89] found id: ""
	I1124 09:54:40.354102 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.354108 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:40.354114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:40.354172 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:40.384758 1849924 cri.go:89] found id: ""
	I1124 09:54:40.384772 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.384779 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:40.384786 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:40.384797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:40.452137 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:40.452157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:40.467741 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:40.467757 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:40.535224 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:40.535235 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:40.535246 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.615981 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:40.616005 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:43.148076 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:43.158106 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:43.158169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:43.182985 1849924 cri.go:89] found id: ""
	I1124 09:54:43.182999 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.183006 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:43.183012 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:43.183068 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:43.215806 1849924 cri.go:89] found id: ""
	I1124 09:54:43.215820 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.215837 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:43.215844 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:43.215903 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:43.244278 1849924 cri.go:89] found id: ""
	I1124 09:54:43.244301 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.244309 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:43.244314 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:43.244385 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:43.272908 1849924 cri.go:89] found id: ""
	I1124 09:54:43.272931 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.272938 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:43.272949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:43.273029 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:43.297907 1849924 cri.go:89] found id: ""
	I1124 09:54:43.297921 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.297927 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:43.297933 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:43.298008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:43.330376 1849924 cri.go:89] found id: ""
	I1124 09:54:43.330391 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.330397 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:43.330403 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:43.330459 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:43.359850 1849924 cri.go:89] found id: ""
	I1124 09:54:43.359864 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.359871 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:43.359879 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:43.359898 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:43.426992 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:43.427012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:43.441799 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:43.441816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:43.504072 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:43.504082 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:43.504093 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:43.585362 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:43.585390 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.114191 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:46.124223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:46.124285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:46.151013 1849924 cri.go:89] found id: ""
	I1124 09:54:46.151027 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.151034 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:46.151039 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:46.151096 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:46.177170 1849924 cri.go:89] found id: ""
	I1124 09:54:46.177184 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.177191 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:46.177196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:46.177258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:46.205800 1849924 cri.go:89] found id: ""
	I1124 09:54:46.205814 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.205822 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:46.205828 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:46.205893 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:46.239665 1849924 cri.go:89] found id: ""
	I1124 09:54:46.239689 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.239697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:46.239702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:46.239782 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:46.274455 1849924 cri.go:89] found id: ""
	I1124 09:54:46.274480 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.274488 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:46.274494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:46.274574 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:46.300659 1849924 cri.go:89] found id: ""
	I1124 09:54:46.300673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.300680 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:46.300686 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:46.300760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:46.326694 1849924 cri.go:89] found id: ""
	I1124 09:54:46.326708 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.326715 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:46.326723 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:46.326735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:46.389430 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:46.389441 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:46.389452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:46.467187 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:46.467207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.499873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:46.499889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:46.574600 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:46.574626 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.092671 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:49.102878 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:49.102942 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:49.130409 1849924 cri.go:89] found id: ""
	I1124 09:54:49.130431 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.130439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:49.130445 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:49.130508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:49.156861 1849924 cri.go:89] found id: ""
	I1124 09:54:49.156874 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.156891 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:49.156897 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:49.156964 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:49.183346 1849924 cri.go:89] found id: ""
	I1124 09:54:49.183369 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.183376 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:49.183382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:49.183442 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:49.217035 1849924 cri.go:89] found id: ""
	I1124 09:54:49.217049 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.217056 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:49.217062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:49.217146 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:49.245694 1849924 cri.go:89] found id: ""
	I1124 09:54:49.245713 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.245720 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:49.245726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:49.245891 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:49.284969 1849924 cri.go:89] found id: ""
	I1124 09:54:49.284983 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.284990 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:49.284995 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:49.285055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:49.314521 1849924 cri.go:89] found id: ""
	I1124 09:54:49.314535 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.314542 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:49.314549 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:49.314560 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:49.398958 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:49.398979 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:49.428494 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:49.428511 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:49.497701 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:49.497725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.513336 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:49.513352 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:49.581585 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.081862 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:52.092629 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:52.092692 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:52.124453 1849924 cri.go:89] found id: ""
	I1124 09:54:52.124475 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.124482 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:52.124488 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:52.124546 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:52.151758 1849924 cri.go:89] found id: ""
	I1124 09:54:52.151771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.151778 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:52.151784 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:52.151844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:52.176757 1849924 cri.go:89] found id: ""
	I1124 09:54:52.176771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.176778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:52.176783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:52.176846 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:52.201940 1849924 cri.go:89] found id: ""
	I1124 09:54:52.201954 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.201961 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:52.201967 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:52.202025 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:52.248612 1849924 cri.go:89] found id: ""
	I1124 09:54:52.248625 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.248632 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:52.248638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:52.248713 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:52.279382 1849924 cri.go:89] found id: ""
	I1124 09:54:52.279396 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.279404 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:52.279409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:52.279471 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:52.308695 1849924 cri.go:89] found id: ""
	I1124 09:54:52.308709 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.308717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:52.308724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:52.308735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:52.376027 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:52.376050 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:52.391327 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:52.391343 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:52.459367 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.459377 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:52.459389 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:52.535870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:52.535893 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:55.066284 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:55.077139 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:55.077203 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:55.105400 1849924 cri.go:89] found id: ""
	I1124 09:54:55.105498 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.105506 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:55.105512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:55.105620 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:55.136637 1849924 cri.go:89] found id: ""
	I1124 09:54:55.136651 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.136659 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:55.136664 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:55.136729 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:55.164659 1849924 cri.go:89] found id: ""
	I1124 09:54:55.164673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.164680 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:55.164685 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:55.164749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:55.190091 1849924 cri.go:89] found id: ""
	I1124 09:54:55.190117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.190124 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:55.190129 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:55.190191 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:55.224336 1849924 cri.go:89] found id: ""
	I1124 09:54:55.224351 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.224358 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:55.224363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:55.224424 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:55.259735 1849924 cri.go:89] found id: ""
	I1124 09:54:55.259748 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.259755 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:55.259761 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:55.259821 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:55.290052 1849924 cri.go:89] found id: ""
	I1124 09:54:55.290065 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.290072 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:55.290079 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:55.290090 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:55.355938 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:55.355957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:55.371501 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:55.371518 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:55.437126 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:55.437140 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:55.437152 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:55.515834 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:55.515854 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.048421 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:58.059495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:58.059560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:58.087204 1849924 cri.go:89] found id: ""
	I1124 09:54:58.087219 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.087226 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:58.087232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:58.087292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:58.118248 1849924 cri.go:89] found id: ""
	I1124 09:54:58.118262 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.118270 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:58.118276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:58.118336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:58.144878 1849924 cri.go:89] found id: ""
	I1124 09:54:58.144892 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.144899 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:58.144905 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:58.144963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:58.171781 1849924 cri.go:89] found id: ""
	I1124 09:54:58.171795 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.171814 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:58.171820 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:58.171898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:58.200885 1849924 cri.go:89] found id: ""
	I1124 09:54:58.200907 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.200915 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:58.200920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:58.200993 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:58.231674 1849924 cri.go:89] found id: ""
	I1124 09:54:58.231688 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.231695 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:58.231718 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:58.231792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:58.266664 1849924 cri.go:89] found id: ""
	I1124 09:54:58.266679 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.266686 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:58.266694 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:58.266705 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.300806 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:58.300822 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:58.367929 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:58.367949 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:58.383950 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:58.383967 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:58.449243 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:58.449254 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:58.449279 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:01.029569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:01.040150 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:01.040231 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:01.067942 1849924 cri.go:89] found id: ""
	I1124 09:55:01.067955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.067962 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:01.067968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:01.068031 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:01.095348 1849924 cri.go:89] found id: ""
	I1124 09:55:01.095362 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.095369 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:01.095375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:01.095436 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:01.125781 1849924 cri.go:89] found id: ""
	I1124 09:55:01.125795 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.125803 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:01.125808 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:01.125871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:01.153546 1849924 cri.go:89] found id: ""
	I1124 09:55:01.153561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.153568 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:01.153575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:01.153643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:01.183965 1849924 cri.go:89] found id: ""
	I1124 09:55:01.183980 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.183987 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:01.183993 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:01.184055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:01.218518 1849924 cri.go:89] found id: ""
	I1124 09:55:01.218533 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.218541 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:01.218548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:01.218628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:01.255226 1849924 cri.go:89] found id: ""
	I1124 09:55:01.255241 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.255248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:01.255255 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:01.255266 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:01.290705 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:01.290723 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:01.362275 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:01.362296 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:01.378338 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:01.378357 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:01.447338 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:01.447348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:01.447359 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.029431 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:04.039677 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:04.039753 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:04.064938 1849924 cri.go:89] found id: ""
	I1124 09:55:04.064952 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.064968 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:04.064975 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:04.065032 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:04.091065 1849924 cri.go:89] found id: ""
	I1124 09:55:04.091079 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.091087 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:04.091092 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:04.091155 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:04.119888 1849924 cri.go:89] found id: ""
	I1124 09:55:04.119902 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.119910 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:04.119915 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:04.119990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:04.145893 1849924 cri.go:89] found id: ""
	I1124 09:55:04.145907 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.145914 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:04.145920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:04.145981 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:04.172668 1849924 cri.go:89] found id: ""
	I1124 09:55:04.172682 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.172689 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:04.172695 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:04.172770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:04.199546 1849924 cri.go:89] found id: ""
	I1124 09:55:04.199559 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.199576 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:04.199582 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:04.199654 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:04.233837 1849924 cri.go:89] found id: ""
	I1124 09:55:04.233850 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.233857 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:04.233865 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:04.233875 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:04.312846 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:04.312868 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:04.328376 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:04.328393 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:04.392893 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:04.392903 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:04.392914 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.474469 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:04.474497 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.002775 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:07.014668 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:07.014734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:07.041533 1849924 cri.go:89] found id: ""
	I1124 09:55:07.041549 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.041556 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:07.041563 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:07.041628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:07.071414 1849924 cri.go:89] found id: ""
	I1124 09:55:07.071429 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.071436 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:07.071442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:07.071500 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:07.102622 1849924 cri.go:89] found id: ""
	I1124 09:55:07.102637 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.102644 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:07.102650 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:07.102708 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:07.127684 1849924 cri.go:89] found id: ""
	I1124 09:55:07.127713 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.127720 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:07.127726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:07.127792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:07.153696 1849924 cri.go:89] found id: ""
	I1124 09:55:07.153710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.153718 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:07.153724 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:07.153785 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:07.186158 1849924 cri.go:89] found id: ""
	I1124 09:55:07.186180 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.186187 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:07.186193 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:07.186252 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:07.217520 1849924 cri.go:89] found id: ""
	I1124 09:55:07.217554 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.217562 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:07.217570 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:07.217580 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.247265 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:07.247288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:07.320517 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:07.320537 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:07.336358 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:07.336373 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:07.403281 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:07.403292 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:07.403302 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:09.981463 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:09.992128 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:09.992195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:10.021174 1849924 cri.go:89] found id: ""
	I1124 09:55:10.021189 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.021197 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:10.021203 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:10.021267 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:10.049180 1849924 cri.go:89] found id: ""
	I1124 09:55:10.049194 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.049202 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:10.049207 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:10.049270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:10.078645 1849924 cri.go:89] found id: ""
	I1124 09:55:10.078660 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.078667 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:10.078673 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:10.078734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:10.106290 1849924 cri.go:89] found id: ""
	I1124 09:55:10.106304 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.106312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:10.106318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:10.106390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:10.133401 1849924 cri.go:89] found id: ""
	I1124 09:55:10.133455 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.133462 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:10.133468 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:10.133544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:10.162805 1849924 cri.go:89] found id: ""
	I1124 09:55:10.162820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.162827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:10.162833 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:10.162890 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:10.189156 1849924 cri.go:89] found id: ""
	I1124 09:55:10.189170 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.189177 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:10.189185 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:10.189206 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:10.280238 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:10.280247 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:10.280258 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:10.359007 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:10.359031 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:10.395999 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:10.396024 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:10.462661 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:10.462683 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:12.979323 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:12.989228 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:12.989300 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:13.016908 1849924 cri.go:89] found id: ""
	I1124 09:55:13.016922 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.016929 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:13.016935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:13.016998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:13.044445 1849924 cri.go:89] found id: ""
	I1124 09:55:13.044467 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.044474 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:13.044480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:13.044547 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:13.070357 1849924 cri.go:89] found id: ""
	I1124 09:55:13.070379 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.070387 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:13.070392 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:13.070461 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:13.098253 1849924 cri.go:89] found id: ""
	I1124 09:55:13.098267 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.098274 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:13.098280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:13.098339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:13.124183 1849924 cri.go:89] found id: ""
	I1124 09:55:13.124196 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.124203 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:13.124209 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:13.124269 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:13.150521 1849924 cri.go:89] found id: ""
	I1124 09:55:13.150536 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.150543 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:13.150549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:13.150619 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:13.181696 1849924 cri.go:89] found id: ""
	I1124 09:55:13.181710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.181717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:13.181724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:13.181735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:13.250758 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:13.250778 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:13.271249 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:13.271264 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:13.332213 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:13.332223 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:13.332235 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:13.409269 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:13.409293 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:15.940893 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:15.951127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:15.951201 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:15.976744 1849924 cri.go:89] found id: ""
	I1124 09:55:15.976767 1849924 logs.go:282] 0 containers: []
	W1124 09:55:15.976774 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:15.976780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:15.976848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:16.005218 1849924 cri.go:89] found id: ""
	I1124 09:55:16.005235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.005245 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:16.005251 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:16.005336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:16.036862 1849924 cri.go:89] found id: ""
	I1124 09:55:16.036888 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.036896 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:16.036902 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:16.036990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:16.063354 1849924 cri.go:89] found id: ""
	I1124 09:55:16.063369 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.063376 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:16.063382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:16.063455 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:16.092197 1849924 cri.go:89] found id: ""
	I1124 09:55:16.092211 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.092218 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:16.092224 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:16.092286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:16.117617 1849924 cri.go:89] found id: ""
	I1124 09:55:16.117631 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.117639 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:16.117644 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:16.117702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:16.143200 1849924 cri.go:89] found id: ""
	I1124 09:55:16.143214 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.143220 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:16.143228 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:16.143239 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:16.171873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:16.171889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:16.247500 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:16.247519 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:16.267064 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:16.267080 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:16.337347 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:16.337357 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:16.337368 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:18.916700 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:18.927603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:18.927697 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:18.958633 1849924 cri.go:89] found id: ""
	I1124 09:55:18.958649 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.958656 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:18.958662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:18.958725 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:18.988567 1849924 cri.go:89] found id: ""
	I1124 09:55:18.988582 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.988589 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:18.988594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:18.988665 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:19.016972 1849924 cri.go:89] found id: ""
	I1124 09:55:19.016986 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.016993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:19.016999 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:19.017058 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:19.042806 1849924 cri.go:89] found id: ""
	I1124 09:55:19.042827 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.042835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:19.042841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:19.042905 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:19.073274 1849924 cri.go:89] found id: ""
	I1124 09:55:19.073288 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.073296 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:19.073301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:19.073368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:19.099687 1849924 cri.go:89] found id: ""
	I1124 09:55:19.099701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.099708 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:19.099714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:19.099780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:19.126512 1849924 cri.go:89] found id: ""
	I1124 09:55:19.126526 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.126532 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:19.126540 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:19.126550 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:19.194410 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:19.194430 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:19.216505 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:19.216527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:19.291566 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:19.291578 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:19.291591 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:19.371192 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:19.371213 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:21.902356 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:21.912405 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:21.912468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:21.937243 1849924 cri.go:89] found id: ""
	I1124 09:55:21.937256 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.937270 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:21.937276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:21.937335 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:21.963054 1849924 cri.go:89] found id: ""
	I1124 09:55:21.963068 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.963075 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:21.963080 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:21.963136 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:21.988695 1849924 cri.go:89] found id: ""
	I1124 09:55:21.988708 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.988715 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:21.988722 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:21.988780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:22.015029 1849924 cri.go:89] found id: ""
	I1124 09:55:22.015043 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.015050 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:22.015056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:22.015117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:22.044828 1849924 cri.go:89] found id: ""
	I1124 09:55:22.044843 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.044851 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:22.044857 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:22.044919 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:22.071875 1849924 cri.go:89] found id: ""
	I1124 09:55:22.071889 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.071897 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:22.071903 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:22.071970 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:22.099237 1849924 cri.go:89] found id: ""
	I1124 09:55:22.099252 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.099259 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:22.099267 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:22.099278 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:22.170156 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:22.170176 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:22.185271 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:22.185288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:22.271963 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:22.271973 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:22.271984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:22.349426 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:22.349447 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:24.878185 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:24.888725 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:24.888800 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:24.915846 1849924 cri.go:89] found id: ""
	I1124 09:55:24.915860 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.915867 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:24.915872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:24.915931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:24.944104 1849924 cri.go:89] found id: ""
	I1124 09:55:24.944118 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.944125 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:24.944131 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:24.944196 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:24.970424 1849924 cri.go:89] found id: ""
	I1124 09:55:24.970438 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.970445 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:24.970450 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:24.970511 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:24.999941 1849924 cri.go:89] found id: ""
	I1124 09:55:24.999955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.999962 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:24.999968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:25.000027 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:25.030682 1849924 cri.go:89] found id: ""
	I1124 09:55:25.030700 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.030707 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:25.030714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:25.030788 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:25.061169 1849924 cri.go:89] found id: ""
	I1124 09:55:25.061183 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.061191 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:25.061196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:25.061262 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:25.092046 1849924 cri.go:89] found id: ""
	I1124 09:55:25.092061 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.092069 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:25.092078 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:25.092089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:25.164204 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:25.164229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:25.180461 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:25.180477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:25.270104 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:25.270114 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:25.270125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:25.349962 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:25.349985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:27.885869 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:27.895923 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:27.895990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:27.923576 1849924 cri.go:89] found id: ""
	I1124 09:55:27.923591 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.923598 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:27.923604 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:27.923660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:27.949384 1849924 cri.go:89] found id: ""
	I1124 09:55:27.949398 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.949405 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:27.949409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:27.949468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:27.974662 1849924 cri.go:89] found id: ""
	I1124 09:55:27.974675 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.974682 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:27.974687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:27.974752 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:28.000014 1849924 cri.go:89] found id: ""
	I1124 09:55:28.000028 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.000035 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:28.000041 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:28.000113 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:28.031383 1849924 cri.go:89] found id: ""
	I1124 09:55:28.031397 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.031404 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:28.031410 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:28.031468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:28.062851 1849924 cri.go:89] found id: ""
	I1124 09:55:28.062872 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.062880 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:28.062886 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:28.062965 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:28.091592 1849924 cri.go:89] found id: ""
	I1124 09:55:28.091608 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.091623 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:28.091633 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:28.091646 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:28.125018 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:28.125035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:28.190729 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:28.190751 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:28.205665 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:28.205681 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:28.285905 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:28.285917 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:28.285927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:30.864245 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:30.876164 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:30.876248 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:30.901572 1849924 cri.go:89] found id: ""
	I1124 09:55:30.901586 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.901593 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:30.901599 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:30.901659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:30.931361 1849924 cri.go:89] found id: ""
	I1124 09:55:30.931374 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.931382 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:30.931388 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:30.931449 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:30.956087 1849924 cri.go:89] found id: ""
	I1124 09:55:30.956101 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.956108 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:30.956114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:30.956174 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:30.981912 1849924 cri.go:89] found id: ""
	I1124 09:55:30.981925 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.981933 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:30.981938 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:30.982013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:31.010764 1849924 cri.go:89] found id: ""
	I1124 09:55:31.010778 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.010804 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:31.010811 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:31.010884 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:31.037094 1849924 cri.go:89] found id: ""
	I1124 09:55:31.037140 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.037146 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:31.037153 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:31.037221 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:31.064060 1849924 cri.go:89] found id: ""
	I1124 09:55:31.064075 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.064092 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:31.064100 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:31.064111 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:31.129432 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:31.129444 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:31.129455 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:31.207603 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:31.207622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:31.246019 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:31.246035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:31.313859 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:31.313882 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:33.829785 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:33.839749 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:33.839813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:33.864810 1849924 cri.go:89] found id: ""
	I1124 09:55:33.864824 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.864831 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:33.864837 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:33.864898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:33.890309 1849924 cri.go:89] found id: ""
	I1124 09:55:33.890324 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.890331 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:33.890336 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:33.890401 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:33.922386 1849924 cri.go:89] found id: ""
	I1124 09:55:33.922399 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.922406 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:33.922412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:33.922473 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:33.947199 1849924 cri.go:89] found id: ""
	I1124 09:55:33.947213 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.947220 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:33.947226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:33.947289 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:33.972195 1849924 cri.go:89] found id: ""
	I1124 09:55:33.972209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.972216 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:33.972222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:33.972294 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:33.997877 1849924 cri.go:89] found id: ""
	I1124 09:55:33.997891 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.997898 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:33.997904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:33.997961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:34.024719 1849924 cri.go:89] found id: ""
	I1124 09:55:34.024733 1849924 logs.go:282] 0 containers: []
	W1124 09:55:34.024741 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:34.024748 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:34.024769 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:34.089874 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:34.089896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:34.104839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:34.104857 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:34.171681 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:34.171691 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:34.171702 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:34.249876 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:34.249896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:36.781512 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:36.791518 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:36.791579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:36.820485 1849924 cri.go:89] found id: ""
	I1124 09:55:36.820500 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.820508 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:36.820514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:36.820589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:36.845963 1849924 cri.go:89] found id: ""
	I1124 09:55:36.845978 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.845985 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:36.845991 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:36.846062 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:36.880558 1849924 cri.go:89] found id: ""
	I1124 09:55:36.880573 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.880580 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:36.880586 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:36.880656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:36.908730 1849924 cri.go:89] found id: ""
	I1124 09:55:36.908745 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.908752 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:36.908769 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:36.908830 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:36.936618 1849924 cri.go:89] found id: ""
	I1124 09:55:36.936634 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.936646 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:36.936662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:36.936724 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:36.961091 1849924 cri.go:89] found id: ""
	I1124 09:55:36.961134 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.961142 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:36.961148 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:36.961215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:36.986263 1849924 cri.go:89] found id: ""
	I1124 09:55:36.986278 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.986285 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:36.986293 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:36.986304 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:37.061090 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:37.061120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:37.076634 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:37.076652 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:37.144407 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:37.144417 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:37.144427 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:37.223887 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:37.223907 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:39.759307 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:39.769265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:39.769325 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:39.795092 1849924 cri.go:89] found id: ""
	I1124 09:55:39.795107 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.795114 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:39.795120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:39.795180 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:39.821381 1849924 cri.go:89] found id: ""
	I1124 09:55:39.821396 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.821403 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:39.821408 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:39.821480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:39.850195 1849924 cri.go:89] found id: ""
	I1124 09:55:39.850209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.850224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:39.850232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:39.850291 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:39.875376 1849924 cri.go:89] found id: ""
	I1124 09:55:39.875391 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.875398 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:39.875404 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:39.875466 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:39.904124 1849924 cri.go:89] found id: ""
	I1124 09:55:39.904138 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.904146 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:39.904151 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:39.904222 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:39.930807 1849924 cri.go:89] found id: ""
	I1124 09:55:39.930820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.930827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:39.930832 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:39.930889 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:39.960435 1849924 cri.go:89] found id: ""
	I1124 09:55:39.960449 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.960456 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:39.960464 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:39.960475 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:40.030261 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:40.030271 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:40.030283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:40.109590 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:40.109615 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:40.143688 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:40.143704 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:40.212394 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:40.212412 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:42.734304 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:42.744432 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:42.744494 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:42.769686 1849924 cri.go:89] found id: ""
	I1124 09:55:42.769701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.769708 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:42.769714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:42.769774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:42.794368 1849924 cri.go:89] found id: ""
	I1124 09:55:42.794381 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.794388 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:42.794394 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:42.794460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:42.819036 1849924 cri.go:89] found id: ""
	I1124 09:55:42.819051 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.819058 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:42.819067 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:42.819126 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:42.845429 1849924 cri.go:89] found id: ""
	I1124 09:55:42.845444 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.845452 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:42.845457 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:42.845516 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:42.873391 1849924 cri.go:89] found id: ""
	I1124 09:55:42.873405 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.873412 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:42.873418 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:42.873483 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:42.899532 1849924 cri.go:89] found id: ""
	I1124 09:55:42.899560 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.899567 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:42.899575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:42.899642 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:42.925159 1849924 cri.go:89] found id: ""
	I1124 09:55:42.925173 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.925180 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:42.925188 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:42.925215 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:43.003079 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:43.003104 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:43.041964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:43.041990 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:43.120202 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:43.120224 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:43.143097 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:43.143191 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:43.219616 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:45.719895 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:45.730306 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:45.730370 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:45.755318 1849924 cri.go:89] found id: ""
	I1124 09:55:45.755333 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.755341 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:45.755353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:45.755413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:45.781283 1849924 cri.go:89] found id: ""
	I1124 09:55:45.781299 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.781305 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:45.781311 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:45.781369 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:45.807468 1849924 cri.go:89] found id: ""
	I1124 09:55:45.807482 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.807489 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:45.807495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:45.807554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:45.836726 1849924 cri.go:89] found id: ""
	I1124 09:55:45.836741 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.836749 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:45.836754 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:45.836813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:45.862613 1849924 cri.go:89] found id: ""
	I1124 09:55:45.862628 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.862635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:45.862641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:45.862702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:45.894972 1849924 cri.go:89] found id: ""
	I1124 09:55:45.894987 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.894994 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:45.895000 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:45.895067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:45.922194 1849924 cri.go:89] found id: ""
	I1124 09:55:45.922209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.922217 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:45.922224 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:45.922237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:45.954912 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:45.954930 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:46.021984 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:46.022004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:46.037849 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:46.037865 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:46.101460 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:46.101473 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:46.101483 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:48.688081 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:48.698194 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:48.698260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:48.724390 1849924 cri.go:89] found id: ""
	I1124 09:55:48.724404 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.724411 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:48.724416 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:48.724480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:48.749323 1849924 cri.go:89] found id: ""
	I1124 09:55:48.749337 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.749344 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:48.749350 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:48.749406 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:48.774542 1849924 cri.go:89] found id: ""
	I1124 09:55:48.774555 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.774562 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:48.774569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:48.774635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:48.799553 1849924 cri.go:89] found id: ""
	I1124 09:55:48.799568 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.799575 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:48.799580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:48.799637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:48.824768 1849924 cri.go:89] found id: ""
	I1124 09:55:48.824782 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.824789 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:48.824794 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:48.824849 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:48.853654 1849924 cri.go:89] found id: ""
	I1124 09:55:48.853668 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.853674 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:48.853680 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:48.853738 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:48.880137 1849924 cri.go:89] found id: ""
	I1124 09:55:48.880151 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.880158 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:48.880166 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:48.880178 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:48.943985 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:48.943998 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:48.944008 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:49.021387 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:49.021407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:49.054551 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:49.054566 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:49.124670 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:49.124690 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.640001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:51.650264 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:51.650326 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:51.675421 1849924 cri.go:89] found id: ""
	I1124 09:55:51.675434 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.675442 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:51.675447 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:51.675510 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:51.703552 1849924 cri.go:89] found id: ""
	I1124 09:55:51.703566 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.703573 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:51.703578 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:51.703637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:51.731457 1849924 cri.go:89] found id: ""
	I1124 09:55:51.731470 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.731477 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:51.731483 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:51.731540 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:51.757515 1849924 cri.go:89] found id: ""
	I1124 09:55:51.757529 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.757536 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:51.757541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:51.757604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:51.787493 1849924 cri.go:89] found id: ""
	I1124 09:55:51.787507 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.787514 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:51.787520 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:51.787579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:51.813153 1849924 cri.go:89] found id: ""
	I1124 09:55:51.813166 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.813173 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:51.813179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:51.813250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:51.845222 1849924 cri.go:89] found id: ""
	I1124 09:55:51.845235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.845244 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:51.845252 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:51.845272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.860214 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:51.860236 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:51.924176 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:51.924186 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:51.924196 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:52.001608 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:52.001629 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:52.037448 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:52.037466 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.609480 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:54.620161 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:54.620223 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:54.649789 1849924 cri.go:89] found id: ""
	I1124 09:55:54.649803 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.649810 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:54.649816 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:54.649879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:54.677548 1849924 cri.go:89] found id: ""
	I1124 09:55:54.677561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.677568 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:54.677573 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:54.677635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:54.707602 1849924 cri.go:89] found id: ""
	I1124 09:55:54.707616 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.707623 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:54.707628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:54.707687 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:54.737369 1849924 cri.go:89] found id: ""
	I1124 09:55:54.737382 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.737390 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:54.737396 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:54.737460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:54.764514 1849924 cri.go:89] found id: ""
	I1124 09:55:54.764528 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.764536 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:54.764541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:54.764599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:54.789898 1849924 cri.go:89] found id: ""
	I1124 09:55:54.789912 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.789920 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:54.789925 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:54.789986 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:54.815652 1849924 cri.go:89] found id: ""
	I1124 09:55:54.815665 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.815672 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:54.815681 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:54.815691 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.882879 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:54.882901 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:54.898593 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:54.898622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:54.967134 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:54.967146 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:54.967157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:55.046870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:55.046891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.578091 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:57.588580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:57.588643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:57.617411 1849924 cri.go:89] found id: ""
	I1124 09:55:57.617425 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.617432 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:57.617437 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:57.617503 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:57.642763 1849924 cri.go:89] found id: ""
	I1124 09:55:57.642777 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.642784 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:57.642789 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:57.642848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:57.668484 1849924 cri.go:89] found id: ""
	I1124 09:55:57.668499 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.668506 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:57.668512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:57.668571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:57.694643 1849924 cri.go:89] found id: ""
	I1124 09:55:57.694657 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.694664 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:57.694670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:57.694730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:57.720049 1849924 cri.go:89] found id: ""
	I1124 09:55:57.720063 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.720070 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:57.720075 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:57.720140 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:57.748016 1849924 cri.go:89] found id: ""
	I1124 09:55:57.748029 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.748036 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:57.748044 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:57.748104 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:57.774253 1849924 cri.go:89] found id: ""
	I1124 09:55:57.774266 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.774273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:57.774281 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:57.774295 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:57.789236 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:57.789253 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:57.851207 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:57.851217 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:57.851229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:57.927927 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:57.927946 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.959058 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:57.959075 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.529440 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:00.539970 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:00.540034 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:00.566556 1849924 cri.go:89] found id: ""
	I1124 09:56:00.566570 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.566583 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:00.566589 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:00.566659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:00.596278 1849924 cri.go:89] found id: ""
	I1124 09:56:00.596291 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.596298 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:00.596304 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:00.596362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:00.623580 1849924 cri.go:89] found id: ""
	I1124 09:56:00.623593 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.623600 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:00.623605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:00.623664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:00.648991 1849924 cri.go:89] found id: ""
	I1124 09:56:00.649006 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.649012 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:00.649018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:00.649078 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:00.676614 1849924 cri.go:89] found id: ""
	I1124 09:56:00.676628 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.676635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:00.676641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:00.676706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:00.701480 1849924 cri.go:89] found id: ""
	I1124 09:56:00.701502 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.701509 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:00.701516 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:00.701575 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:00.727550 1849924 cri.go:89] found id: ""
	I1124 09:56:00.727563 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.727570 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:00.727578 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:00.727589 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:00.755964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:00.755980 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.822018 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:00.822039 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:00.837252 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:00.837268 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:00.901931 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:00.901942 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:00.901957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.481859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:03.493893 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:03.493961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:03.522628 1849924 cri.go:89] found id: ""
	I1124 09:56:03.522643 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.522650 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:03.522656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:03.522716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:03.551454 1849924 cri.go:89] found id: ""
	I1124 09:56:03.551468 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.551475 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:03.551480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:03.551539 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:03.580931 1849924 cri.go:89] found id: ""
	I1124 09:56:03.580945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.580951 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:03.580957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:03.581015 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:03.607826 1849924 cri.go:89] found id: ""
	I1124 09:56:03.607840 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.607846 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:03.607852 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:03.607923 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:03.637843 1849924 cri.go:89] found id: ""
	I1124 09:56:03.637857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.637865 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:03.637870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:03.637931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:03.665156 1849924 cri.go:89] found id: ""
	I1124 09:56:03.665170 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.665176 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:03.665182 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:03.665250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:03.690810 1849924 cri.go:89] found id: ""
	I1124 09:56:03.690824 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.690831 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:03.690839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:03.690849 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:03.755803 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:03.755813 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:03.755823 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.832793 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:03.832816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:03.860351 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:03.860367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:03.930446 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:03.930465 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.445925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:06.457385 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:06.457451 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:06.490931 1849924 cri.go:89] found id: ""
	I1124 09:56:06.490944 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.490951 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:06.490956 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:06.491013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:06.529326 1849924 cri.go:89] found id: ""
	I1124 09:56:06.529340 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.529347 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:06.529353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:06.529409 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:06.554888 1849924 cri.go:89] found id: ""
	I1124 09:56:06.554914 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.554921 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:06.554926 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:06.554984 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:06.579750 1849924 cri.go:89] found id: ""
	I1124 09:56:06.579764 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.579771 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:06.579781 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:06.579839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:06.605075 1849924 cri.go:89] found id: ""
	I1124 09:56:06.605098 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.605134 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:06.605140 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:06.605207 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:06.630281 1849924 cri.go:89] found id: ""
	I1124 09:56:06.630295 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.630302 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:06.630307 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:06.630366 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:06.655406 1849924 cri.go:89] found id: ""
	I1124 09:56:06.655427 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.655435 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:06.655442 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:06.655453 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:06.722316 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:06.722335 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.737174 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:06.737190 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:06.801018 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:06.801032 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:06.801042 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:06.882225 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:06.882254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.412996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:09.423266 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:09.423332 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:09.452270 1849924 cri.go:89] found id: ""
	I1124 09:56:09.452283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.452290 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:09.452295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:09.452353 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:09.484931 1849924 cri.go:89] found id: ""
	I1124 09:56:09.484945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.484952 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:09.484957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:09.485030 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:09.526676 1849924 cri.go:89] found id: ""
	I1124 09:56:09.526689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.526696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:09.526701 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:09.526758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:09.551815 1849924 cri.go:89] found id: ""
	I1124 09:56:09.551828 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.551835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:09.551841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:09.551904 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:09.580143 1849924 cri.go:89] found id: ""
	I1124 09:56:09.580159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.580167 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:09.580173 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:09.580233 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:09.608255 1849924 cri.go:89] found id: ""
	I1124 09:56:09.608269 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.608276 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:09.608281 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:09.608338 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:09.638262 1849924 cri.go:89] found id: ""
	I1124 09:56:09.638276 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.638283 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:09.638291 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:09.638301 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:09.713707 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:09.713728 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.741202 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:09.741218 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:09.806578 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:09.806598 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:09.821839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:09.821855 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:09.888815 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.390494 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:12.400491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:12.400550 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:12.426496 1849924 cri.go:89] found id: ""
	I1124 09:56:12.426511 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.426517 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:12.426524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:12.426587 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:12.457770 1849924 cri.go:89] found id: ""
	I1124 09:56:12.457794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.457801 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:12.457807 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:12.457873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:12.489154 1849924 cri.go:89] found id: ""
	I1124 09:56:12.489167 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.489174 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:12.489179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:12.489250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:12.524997 1849924 cri.go:89] found id: ""
	I1124 09:56:12.525010 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.525018 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:12.525024 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:12.525090 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:12.550538 1849924 cri.go:89] found id: ""
	I1124 09:56:12.550561 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.550569 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:12.550574 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:12.550650 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:12.575990 1849924 cri.go:89] found id: ""
	I1124 09:56:12.576011 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.576018 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:12.576025 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:12.576095 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:12.602083 1849924 cri.go:89] found id: ""
	I1124 09:56:12.602097 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.602104 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:12.602112 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:12.602125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:12.667794 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:12.667814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:12.682815 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:12.682832 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:12.749256 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.749266 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:12.749276 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:12.823882 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:12.823902 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.353890 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:15.364319 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:15.364380 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:15.389759 1849924 cri.go:89] found id: ""
	I1124 09:56:15.389772 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.389786 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:15.389792 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:15.389850 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:15.414921 1849924 cri.go:89] found id: ""
	I1124 09:56:15.414936 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.414943 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:15.414948 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:15.415008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:15.444228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.444242 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.444249 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:15.444254 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:15.444314 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:15.476734 1849924 cri.go:89] found id: ""
	I1124 09:56:15.476747 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.476763 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:15.476768 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:15.476836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:15.507241 1849924 cri.go:89] found id: ""
	I1124 09:56:15.507254 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.507261 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:15.507275 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:15.507339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:15.544058 1849924 cri.go:89] found id: ""
	I1124 09:56:15.544081 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.544089 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:15.544094 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:15.544162 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:15.571228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.571241 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.571248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:15.571261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:15.571272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:15.646647 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:15.646667 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.674311 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:15.674326 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:15.739431 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:15.739451 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:15.754640 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:15.754662 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:15.821471 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.321745 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:18.331603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:18.331664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:18.357195 1849924 cri.go:89] found id: ""
	I1124 09:56:18.357215 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.357223 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:18.357229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:18.357292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:18.387513 1849924 cri.go:89] found id: ""
	I1124 09:56:18.387527 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.387534 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:18.387540 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:18.387600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:18.414561 1849924 cri.go:89] found id: ""
	I1124 09:56:18.414583 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.414590 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:18.414596 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:18.414670 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:18.441543 1849924 cri.go:89] found id: ""
	I1124 09:56:18.441557 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.441564 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:18.441569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:18.441627 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:18.481911 1849924 cri.go:89] found id: ""
	I1124 09:56:18.481924 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.481931 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:18.481937 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:18.481995 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:18.512577 1849924 cri.go:89] found id: ""
	I1124 09:56:18.512589 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.512596 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:18.512601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:18.512660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:18.542006 1849924 cri.go:89] found id: ""
	I1124 09:56:18.542021 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.542028 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:18.542035 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:18.542045 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:18.572217 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:18.572233 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:18.637845 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:18.637863 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:18.653892 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:18.653908 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:18.720870 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.720881 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:18.720891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.300479 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:21.310612 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:21.310716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:21.339787 1849924 cri.go:89] found id: ""
	I1124 09:56:21.339801 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.339808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:21.339819 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:21.339879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:21.364577 1849924 cri.go:89] found id: ""
	I1124 09:56:21.364601 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.364609 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:21.364615 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:21.364688 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:21.391798 1849924 cri.go:89] found id: ""
	I1124 09:56:21.391852 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.391859 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:21.391865 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:21.391939 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:21.417518 1849924 cri.go:89] found id: ""
	I1124 09:56:21.417532 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.417539 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:21.417545 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:21.417600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:21.443079 1849924 cri.go:89] found id: ""
	I1124 09:56:21.443092 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.443099 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:21.443104 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:21.443164 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:21.483649 1849924 cri.go:89] found id: ""
	I1124 09:56:21.483663 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.483685 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:21.483691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:21.483758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:21.513352 1849924 cri.go:89] found id: ""
	I1124 09:56:21.513367 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.513374 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:21.513383 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:21.513445 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:21.583074 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:21.583095 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:21.598415 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:21.598432 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:21.661326 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:21.661336 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:21.661348 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.742506 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:21.742527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:24.271763 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:24.281983 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:24.282044 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:24.313907 1849924 cri.go:89] found id: ""
	I1124 09:56:24.313920 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.313928 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:24.313934 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:24.314006 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:24.338982 1849924 cri.go:89] found id: ""
	I1124 09:56:24.338996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.339003 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:24.339009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:24.339067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:24.365195 1849924 cri.go:89] found id: ""
	I1124 09:56:24.365209 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.365216 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:24.365222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:24.365292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:24.390215 1849924 cri.go:89] found id: ""
	I1124 09:56:24.390228 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.390235 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:24.390241 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:24.390299 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:24.415458 1849924 cri.go:89] found id: ""
	I1124 09:56:24.415472 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.415479 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:24.415484 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:24.415544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:24.442483 1849924 cri.go:89] found id: ""
	I1124 09:56:24.442497 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.442504 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:24.442510 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:24.442571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:24.478898 1849924 cri.go:89] found id: ""
	I1124 09:56:24.478912 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.478919 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:24.478926 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:24.478936 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:24.559295 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:24.559320 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:24.575521 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:24.575538 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:24.643962 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:24.643974 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:24.643985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:24.721863 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:24.721883 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.252684 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:27.262544 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:27.262604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:27.288190 1849924 cri.go:89] found id: ""
	I1124 09:56:27.288203 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.288211 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:27.288216 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:27.288276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:27.315955 1849924 cri.go:89] found id: ""
	I1124 09:56:27.315975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.315983 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:27.315988 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:27.316050 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:27.341613 1849924 cri.go:89] found id: ""
	I1124 09:56:27.341626 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.341633 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:27.341639 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:27.341699 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:27.366677 1849924 cri.go:89] found id: ""
	I1124 09:56:27.366690 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.366697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:27.366703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:27.366768 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:27.392001 1849924 cri.go:89] found id: ""
	I1124 09:56:27.392015 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.392021 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:27.392027 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:27.392085 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:27.419410 1849924 cri.go:89] found id: ""
	I1124 09:56:27.419430 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.419436 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:27.419442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:27.419501 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:27.444780 1849924 cri.go:89] found id: ""
	I1124 09:56:27.444794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.444801 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:27.444809 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:27.444824 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.478836 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:27.478853 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:27.552795 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:27.552814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:27.567935 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:27.567988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:27.630838 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:27.630849 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:27.630859 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:30.212620 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:30.223248 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:30.223313 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:30.249863 1849924 cri.go:89] found id: ""
	I1124 09:56:30.249876 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.249883 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:30.249888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:30.249947 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:30.275941 1849924 cri.go:89] found id: ""
	I1124 09:56:30.275955 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.275974 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:30.275980 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:30.276053 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:30.300914 1849924 cri.go:89] found id: ""
	I1124 09:56:30.300928 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.300944 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:30.300950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:30.301016 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:30.325980 1849924 cri.go:89] found id: ""
	I1124 09:56:30.325994 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.326011 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:30.326018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:30.326089 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:30.352023 1849924 cri.go:89] found id: ""
	I1124 09:56:30.352038 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.352045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:30.352050 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:30.352121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:30.379711 1849924 cri.go:89] found id: ""
	I1124 09:56:30.379724 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.379731 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:30.379736 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:30.379801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:30.409210 1849924 cri.go:89] found id: ""
	I1124 09:56:30.409224 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.409232 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:30.409240 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:30.409251 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:30.437995 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:30.438012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:30.507429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:30.507448 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:30.525911 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:30.525927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:30.589196 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:30.589210 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:30.589220 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:33.172621 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:33.182671 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:33.182730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:33.211695 1849924 cri.go:89] found id: ""
	I1124 09:56:33.211709 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.211716 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:33.211721 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:33.211779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:33.237798 1849924 cri.go:89] found id: ""
	I1124 09:56:33.237811 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.237818 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:33.237824 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:33.237885 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:33.262147 1849924 cri.go:89] found id: ""
	I1124 09:56:33.262160 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.262167 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:33.262172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:33.262230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:33.286667 1849924 cri.go:89] found id: ""
	I1124 09:56:33.286681 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.286690 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:33.286696 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:33.286754 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:33.311109 1849924 cri.go:89] found id: ""
	I1124 09:56:33.311122 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.311129 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:33.311135 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:33.311198 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:33.336757 1849924 cri.go:89] found id: ""
	I1124 09:56:33.336781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.336790 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:33.336796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:33.336864 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:33.365159 1849924 cri.go:89] found id: ""
	I1124 09:56:33.365172 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.365179 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:33.365186 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:33.365197 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:33.393002 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:33.393017 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:33.457704 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:33.457724 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:33.473674 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:33.473700 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:33.547251 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:33.547261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:33.547274 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.125180 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:36.135549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:36.135611 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:36.161892 1849924 cri.go:89] found id: ""
	I1124 09:56:36.161906 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.161913 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:36.161919 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:36.161980 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:36.192254 1849924 cri.go:89] found id: ""
	I1124 09:56:36.192268 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.192275 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:36.192280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:36.192341 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:36.219675 1849924 cri.go:89] found id: ""
	I1124 09:56:36.219689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.219696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:36.219702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:36.219760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:36.249674 1849924 cri.go:89] found id: ""
	I1124 09:56:36.249688 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.249695 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:36.249700 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:36.249756 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:36.276115 1849924 cri.go:89] found id: ""
	I1124 09:56:36.276129 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.276136 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:36.276141 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:36.276199 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:36.303472 1849924 cri.go:89] found id: ""
	I1124 09:56:36.303486 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.303494 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:36.303499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:36.303558 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:36.332774 1849924 cri.go:89] found id: ""
	I1124 09:56:36.332789 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.332796 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:36.332804 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:36.332814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.410262 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:36.410282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:36.442608 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:36.442625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:36.517228 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:36.517247 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:36.532442 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:36.532459 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:36.598941 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
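	Every `kubectl describe nodes` attempt above fails the same way: `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port. A minimal way to reproduce that check directly, without going through kubectl, is a plain TCP probe. This is a sketch, not part of minikube's tooling; the helper name `apiserver_port_open` and the port 8441 are taken from the log above as assumptions.

```python
import socket


def apiserver_port_open(host="localhost", port=8441, timeout=1.0):
    """Return True if a TCP connect to host:port succeeds.

    A failed connect raises OSError (ECONNREFUSED when nothing listens),
    which mirrors the "connect: connection refused" errors in the log.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, the repeated `crictl ps --name=kube-apiserver` calls returning no container IDs (as seen above) are the likely cause: the apiserver container was never created, so there is no process to accept the connection.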
	I1124 09:56:39.099623 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:39.110286 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:39.110347 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:39.135094 1849924 cri.go:89] found id: ""
	I1124 09:56:39.135108 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.135115 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:39.135120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:39.135184 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:39.161664 1849924 cri.go:89] found id: ""
	I1124 09:56:39.161678 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.161685 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:39.161691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:39.161749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:39.186843 1849924 cri.go:89] found id: ""
	I1124 09:56:39.186857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.186865 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:39.186870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:39.186930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:39.212864 1849924 cri.go:89] found id: ""
	I1124 09:56:39.212878 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.212889 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:39.212895 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:39.212953 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:39.243329 1849924 cri.go:89] found id: ""
	I1124 09:56:39.243343 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.243350 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:39.243356 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:39.243421 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:39.268862 1849924 cri.go:89] found id: ""
	I1124 09:56:39.268875 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.268883 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:39.268888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:39.268950 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:39.295966 1849924 cri.go:89] found id: ""
	I1124 09:56:39.295979 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.295986 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:39.295993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:39.296004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:39.327310 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:39.327325 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:39.392831 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:39.392850 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:39.407904 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:39.407920 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:39.476692 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:39.476716 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:39.476729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.055953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:42.067687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:42.067767 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:42.096948 1849924 cri.go:89] found id: ""
	I1124 09:56:42.096963 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.096971 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:42.096977 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:42.097039 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:42.128766 1849924 cri.go:89] found id: ""
	I1124 09:56:42.128781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.128789 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:42.128795 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:42.128861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:42.160266 1849924 cri.go:89] found id: ""
	I1124 09:56:42.160283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.160291 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:42.160297 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:42.160368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:42.191973 1849924 cri.go:89] found id: ""
	I1124 09:56:42.191996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.192004 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:42.192011 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:42.192081 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:42.226204 1849924 cri.go:89] found id: ""
	I1124 09:56:42.226218 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.226226 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:42.226232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:42.226316 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:42.253907 1849924 cri.go:89] found id: ""
	I1124 09:56:42.253922 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.253929 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:42.253935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:42.253998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:42.282770 1849924 cri.go:89] found id: ""
	I1124 09:56:42.282786 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.282793 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:42.282800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:42.282811 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:42.298712 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:42.298729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:42.363239 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:42.363249 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:42.363260 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.437643 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:42.437663 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:42.475221 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:42.475237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:45.048529 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:45.067334 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:45.067432 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:45.099636 1849924 cri.go:89] found id: ""
	I1124 09:56:45.099652 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.099659 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:45.099666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:45.099762 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:45.132659 1849924 cri.go:89] found id: ""
	I1124 09:56:45.132693 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.132701 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:45.132708 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:45.132792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:45.169282 1849924 cri.go:89] found id: ""
	I1124 09:56:45.169306 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.169314 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:45.169320 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:45.169398 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:45.226517 1849924 cri.go:89] found id: ""
	I1124 09:56:45.226533 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.226542 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:45.226548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:45.226626 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:45.265664 1849924 cri.go:89] found id: ""
	I1124 09:56:45.265680 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.265687 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:45.265693 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:45.265759 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:45.298503 1849924 cri.go:89] found id: ""
	I1124 09:56:45.298517 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.298525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:45.298531 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:45.298599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:45.329403 1849924 cri.go:89] found id: ""
	I1124 09:56:45.329436 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.329445 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:45.329453 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:45.329464 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:45.345344 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:45.345361 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:45.412742 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:45.412752 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:45.412763 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:45.493978 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:45.493998 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:45.531425 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:45.531441 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.098018 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:48.108764 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:48.108836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:48.134307 1849924 cri.go:89] found id: ""
	I1124 09:56:48.134321 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.134328 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:48.134333 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:48.134390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:48.159252 1849924 cri.go:89] found id: ""
	I1124 09:56:48.159266 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.159273 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:48.159279 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:48.159337 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:48.184464 1849924 cri.go:89] found id: ""
	I1124 09:56:48.184478 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.184496 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:48.184507 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:48.184589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:48.209500 1849924 cri.go:89] found id: ""
	I1124 09:56:48.209513 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.209520 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:48.209526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:48.209590 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:48.236025 1849924 cri.go:89] found id: ""
	I1124 09:56:48.236039 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.236045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:48.236051 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:48.236121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:48.262196 1849924 cri.go:89] found id: ""
	I1124 09:56:48.262210 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.262216 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:48.262222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:48.262285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:48.286684 1849924 cri.go:89] found id: ""
	I1124 09:56:48.286698 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.286705 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:48.286712 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:48.286725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.354155 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:48.354174 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:48.369606 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:48.369625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:48.436183 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:48.436193 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:48.436207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:48.516667 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:48.516688 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.047020 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:51.057412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:51.057477 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:51.087137 1849924 cri.go:89] found id: ""
	I1124 09:56:51.087159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.087167 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:51.087172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:51.087241 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:51.115003 1849924 cri.go:89] found id: ""
	I1124 09:56:51.115018 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.115025 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:51.115031 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:51.115093 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:51.144604 1849924 cri.go:89] found id: ""
	I1124 09:56:51.144622 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.144631 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:51.144638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:51.144706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:51.172310 1849924 cri.go:89] found id: ""
	I1124 09:56:51.172323 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.172338 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:51.172345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:51.172413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:51.200354 1849924 cri.go:89] found id: ""
	I1124 09:56:51.200376 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.200384 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:51.200390 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:51.200463 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:51.225889 1849924 cri.go:89] found id: ""
	I1124 09:56:51.225903 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.225911 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:51.225917 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:51.225974 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:51.250937 1849924 cri.go:89] found id: ""
	I1124 09:56:51.250950 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.250956 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:51.250972 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:51.250984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.281935 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:51.281951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:51.346955 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:51.346975 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:51.362412 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:51.362428 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:51.424513 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:51.424523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:51.424534 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.006160 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:54.017499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:54.017565 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:54.048035 1849924 cri.go:89] found id: ""
	I1124 09:56:54.048049 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.048056 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:54.048062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:54.048117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:54.075193 1849924 cri.go:89] found id: ""
	I1124 09:56:54.075207 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.075214 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:54.075220 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:54.075278 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:54.101853 1849924 cri.go:89] found id: ""
	I1124 09:56:54.101868 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.101875 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:54.101880 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:54.101938 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:54.128585 1849924 cri.go:89] found id: ""
	I1124 09:56:54.128600 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.128608 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:54.128614 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:54.128673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:54.154726 1849924 cri.go:89] found id: ""
	I1124 09:56:54.154742 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.154750 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:54.154756 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:54.154819 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:54.180936 1849924 cri.go:89] found id: ""
	I1124 09:56:54.180975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.180984 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:54.180990 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:54.181070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:54.209038 1849924 cri.go:89] found id: ""
	I1124 09:56:54.209060 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.209067 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:54.209075 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:54.209085 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:54.279263 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:54.279289 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:54.295105 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:54.295131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:54.367337 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:54.367348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:54.367360 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.442973 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:54.442995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:56.980627 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:56.990375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:56.990434 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:57.016699 1849924 cri.go:89] found id: ""
	I1124 09:56:57.016713 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.016720 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:57.016726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:57.016789 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:57.042924 1849924 cri.go:89] found id: ""
	I1124 09:56:57.042938 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.042945 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:57.042950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:57.043009 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:57.071972 1849924 cri.go:89] found id: ""
	I1124 09:56:57.071986 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.071993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:57.071998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:57.072057 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:57.097765 1849924 cri.go:89] found id: ""
	I1124 09:56:57.097780 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.097789 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:57.097796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:57.097861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:57.124764 1849924 cri.go:89] found id: ""
	I1124 09:56:57.124778 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.124796 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:57.124802 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:57.124871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:57.151558 1849924 cri.go:89] found id: ""
	I1124 09:56:57.151584 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.151591 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:57.151597 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:57.151667 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:57.178335 1849924 cri.go:89] found id: ""
	I1124 09:56:57.178348 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.178355 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:57.178372 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:57.178383 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:57.253968 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:57.253988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:57.284364 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:57.284380 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:57.349827 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:57.349847 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:57.364617 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:57.364633 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:57.425688 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:59.926489 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:59.936801 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:59.936870 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:59.961715 1849924 cri.go:89] found id: ""
	I1124 09:56:59.961728 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.961735 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:59.961741 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:59.961801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:59.990466 1849924 cri.go:89] found id: ""
	I1124 09:56:59.990480 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.990488 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:59.990494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:59.990554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:00.129137 1849924 cri.go:89] found id: ""
	I1124 09:57:00.129161 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.129169 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:00.129175 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:00.129257 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:00.211462 1849924 cri.go:89] found id: ""
	I1124 09:57:00.211478 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.211490 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:00.211506 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:00.211593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:00.274315 1849924 cri.go:89] found id: ""
	I1124 09:57:00.274338 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.274346 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:00.274363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:00.274453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:00.321199 1849924 cri.go:89] found id: ""
	I1124 09:57:00.321233 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.321241 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:00.321247 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:00.321324 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:00.372845 1849924 cri.go:89] found id: ""
	I1124 09:57:00.372861 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.372869 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:00.372878 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:00.372889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:00.444462 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:00.444485 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:00.465343 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:00.465381 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:00.553389 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:00.553402 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:00.553418 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:00.632199 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:00.632219 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:03.162773 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:03.173065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:03.173150 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:03.200418 1849924 cri.go:89] found id: ""
	I1124 09:57:03.200431 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.200439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:03.200444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:03.200502 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:03.227983 1849924 cri.go:89] found id: ""
	I1124 09:57:03.227997 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.228004 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:03.228009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:03.228070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:03.257554 1849924 cri.go:89] found id: ""
	I1124 09:57:03.257568 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.257575 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:03.257581 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:03.257639 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:03.283198 1849924 cri.go:89] found id: ""
	I1124 09:57:03.283210 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.283217 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:03.283223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:03.283280 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:03.307981 1849924 cri.go:89] found id: ""
	I1124 09:57:03.307994 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.308002 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:03.308007 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:03.308063 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:03.337021 1849924 cri.go:89] found id: ""
	I1124 09:57:03.337035 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.337042 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:03.337047 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:03.337130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:03.362116 1849924 cri.go:89] found id: ""
	I1124 09:57:03.362130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.362137 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:03.362144 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:03.362155 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:03.427932 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:03.427951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:03.442952 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:03.442968 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:03.527978 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:03.527989 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:03.528002 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:03.603993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:03.604012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.134966 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:06.147607 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:06.147673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:06.173217 1849924 cri.go:89] found id: ""
	I1124 09:57:06.173231 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.173238 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:06.173243 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:06.173302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:06.203497 1849924 cri.go:89] found id: ""
	I1124 09:57:06.203511 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.203518 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:06.203524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:06.203581 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:06.232192 1849924 cri.go:89] found id: ""
	I1124 09:57:06.232205 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.232212 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:06.232219 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:06.232276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:06.261698 1849924 cri.go:89] found id: ""
	I1124 09:57:06.261711 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.261717 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:06.261723 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:06.261779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:06.286623 1849924 cri.go:89] found id: ""
	I1124 09:57:06.286642 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.286650 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:06.286656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:06.286717 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:06.316085 1849924 cri.go:89] found id: ""
	I1124 09:57:06.316098 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.316105 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:06.316110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:06.316169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:06.344243 1849924 cri.go:89] found id: ""
	I1124 09:57:06.344257 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.344264 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:06.344273 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:06.344283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.375793 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:06.375809 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:06.441133 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:06.441160 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:06.457259 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:06.457282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:06.534017 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:06.534028 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:06.534040 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.110740 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:09.122421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:09.122484 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:09.148151 1849924 cri.go:89] found id: ""
	I1124 09:57:09.148165 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.148172 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:09.148177 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:09.148235 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:09.173265 1849924 cri.go:89] found id: ""
	I1124 09:57:09.173279 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.173288 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:09.173295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:09.173357 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:09.198363 1849924 cri.go:89] found id: ""
	I1124 09:57:09.198377 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.198384 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:09.198389 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:09.198447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:09.224567 1849924 cri.go:89] found id: ""
	I1124 09:57:09.224581 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.224588 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:09.224594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:09.224652 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:09.249182 1849924 cri.go:89] found id: ""
	I1124 09:57:09.249195 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.249205 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:09.249210 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:09.249281 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:09.274039 1849924 cri.go:89] found id: ""
	I1124 09:57:09.274053 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.274060 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:09.274065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:09.274125 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:09.299730 1849924 cri.go:89] found id: ""
	I1124 09:57:09.299744 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.299751 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:09.299758 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:09.299770 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:09.364094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:09.364105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:09.364120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.441482 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:09.441504 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:09.479944 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:09.479961 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:09.549349 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:09.549367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:12.064927 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:12.075315 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:12.075376 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:12.103644 1849924 cri.go:89] found id: ""
	I1124 09:57:12.103658 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.103665 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:12.103670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:12.103774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:12.129120 1849924 cri.go:89] found id: ""
	I1124 09:57:12.129134 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.129141 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:12.129147 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:12.129215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:12.156010 1849924 cri.go:89] found id: ""
	I1124 09:57:12.156024 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.156031 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:12.156036 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:12.156094 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:12.184275 1849924 cri.go:89] found id: ""
	I1124 09:57:12.184289 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.184296 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:12.184301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:12.184362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:12.214700 1849924 cri.go:89] found id: ""
	I1124 09:57:12.214713 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.214726 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:12.214732 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:12.214792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:12.239546 1849924 cri.go:89] found id: ""
	I1124 09:57:12.239559 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.239566 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:12.239572 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:12.239635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:12.264786 1849924 cri.go:89] found id: ""
	I1124 09:57:12.264800 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.264806 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:12.264814 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:12.264826 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:12.324457 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:12.324467 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:12.324477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:12.401396 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:12.401417 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:12.432520 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:12.432535 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:12.502857 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:12.502877 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.018809 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:15.038661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:15.038741 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:15.069028 1849924 cri.go:89] found id: ""
	I1124 09:57:15.069043 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.069050 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:15.069056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:15.069139 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:15.096495 1849924 cri.go:89] found id: ""
	I1124 09:57:15.096513 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.096521 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:15.096526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:15.096593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:15.125417 1849924 cri.go:89] found id: ""
	I1124 09:57:15.125430 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.125438 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:15.125444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:15.125508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:15.152259 1849924 cri.go:89] found id: ""
	I1124 09:57:15.152274 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.152281 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:15.152287 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:15.152348 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:15.178920 1849924 cri.go:89] found id: ""
	I1124 09:57:15.178934 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.178942 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:15.178947 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:15.179024 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:15.207630 1849924 cri.go:89] found id: ""
	I1124 09:57:15.207643 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.207650 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:15.207656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:15.207715 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:15.237971 1849924 cri.go:89] found id: ""
	I1124 09:57:15.237985 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.237992 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:15.238000 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:15.238011 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:15.305169 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:15.305187 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.320240 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:15.320257 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:15.393546 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:15.393556 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:15.393592 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:15.470159 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:15.470179 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:18.001255 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:18.013421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:18.013488 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:18.040787 1849924 cri.go:89] found id: ""
	I1124 09:57:18.040801 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.040808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:18.040814 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:18.040873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:18.066460 1849924 cri.go:89] found id: ""
	I1124 09:57:18.066475 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.066482 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:18.066487 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:18.066544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:18.093970 1849924 cri.go:89] found id: ""
	I1124 09:57:18.093983 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.093990 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:18.093998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:18.094070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:18.119292 1849924 cri.go:89] found id: ""
	I1124 09:57:18.119306 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.119312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:18.119318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:18.119375 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:18.144343 1849924 cri.go:89] found id: ""
	I1124 09:57:18.144356 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.144363 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:18.144369 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:18.144428 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:18.176349 1849924 cri.go:89] found id: ""
	I1124 09:57:18.176362 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.176369 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:18.176375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:18.176435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:18.200900 1849924 cri.go:89] found id: ""
	I1124 09:57:18.200913 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.200920 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:18.200927 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:18.200938 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:18.266434 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:18.266452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:18.281611 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:18.281627 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:18.347510 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:18.347523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:18.347536 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:18.435234 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:18.435254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:20.973569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:20.984347 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:20.984418 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:21.011115 1849924 cri.go:89] found id: ""
	I1124 09:57:21.011130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.011137 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:21.011142 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:21.011204 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:21.041877 1849924 cri.go:89] found id: ""
	I1124 09:57:21.041891 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.041899 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:21.041904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:21.041963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:21.067204 1849924 cri.go:89] found id: ""
	I1124 09:57:21.067217 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.067224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:21.067229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:21.067288 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:21.096444 1849924 cri.go:89] found id: ""
	I1124 09:57:21.096458 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.096464 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:21.096470 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:21.096526 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:21.122011 1849924 cri.go:89] found id: ""
	I1124 09:57:21.122025 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.122033 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:21.122038 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:21.122098 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:21.150504 1849924 cri.go:89] found id: ""
	I1124 09:57:21.150518 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.150525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:21.150530 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:21.150601 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:21.179560 1849924 cri.go:89] found id: ""
	I1124 09:57:21.179573 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.179579 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:21.179587 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:21.179597 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:21.263112 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:21.263134 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:21.291875 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:21.291891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:21.358120 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:21.358139 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:21.373381 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:21.373401 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:21.437277 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:23.938404 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:23.948703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:23.948770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:23.975638 1849924 cri.go:89] found id: ""
	I1124 09:57:23.975653 1849924 logs.go:282] 0 containers: []
	W1124 09:57:23.975660 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:23.975666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:23.975797 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:24.003099 1849924 cri.go:89] found id: ""
	I1124 09:57:24.003114 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.003122 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:24.003127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:24.003195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:24.031320 1849924 cri.go:89] found id: ""
	I1124 09:57:24.031333 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.031340 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:24.031345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:24.031412 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:24.057464 1849924 cri.go:89] found id: ""
	I1124 09:57:24.057479 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.057486 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:24.057491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:24.057560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:24.083571 1849924 cri.go:89] found id: ""
	I1124 09:57:24.083586 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.083593 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:24.083598 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:24.083656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:24.109710 1849924 cri.go:89] found id: ""
	I1124 09:57:24.109724 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.109732 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:24.109737 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:24.109810 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:24.134957 1849924 cri.go:89] found id: ""
	I1124 09:57:24.134971 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.134978 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:24.134985 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:24.134995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:24.206698 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:24.206725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:24.221977 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:24.221995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:24.287450 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:24.287461 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:24.287474 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:24.364870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:24.364890 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:26.899825 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:26.911192 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:26.911260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:26.937341 1849924 cri.go:89] found id: ""
	I1124 09:57:26.937355 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.937361 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:26.937367 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:26.937429 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:26.966037 1849924 cri.go:89] found id: ""
	I1124 09:57:26.966050 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.966057 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:26.966062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:26.966119 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:26.994487 1849924 cri.go:89] found id: ""
	I1124 09:57:26.994501 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.994508 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:26.994514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:26.994572 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:27.024331 1849924 cri.go:89] found id: ""
	I1124 09:57:27.024345 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.024351 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:27.024357 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:27.024414 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:27.051922 1849924 cri.go:89] found id: ""
	I1124 09:57:27.051936 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.051943 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:27.051949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:27.052007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:27.079084 1849924 cri.go:89] found id: ""
	I1124 09:57:27.079097 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.079104 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:27.079110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:27.079166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:27.105333 1849924 cri.go:89] found id: ""
	I1124 09:57:27.105346 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.105362 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:27.105371 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:27.105399 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:27.136135 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:27.136151 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:27.202777 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:27.202797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:27.218147 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:27.218169 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:27.287094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:27.287105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:27.287116 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:29.863883 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:29.874162 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:29.874270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:29.899809 1849924 cri.go:89] found id: ""
	I1124 09:57:29.899825 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.899833 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:29.899839 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:29.899897 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:29.925268 1849924 cri.go:89] found id: ""
	I1124 09:57:29.925282 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.925289 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:29.925295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:29.925355 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:29.953756 1849924 cri.go:89] found id: ""
	I1124 09:57:29.953770 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.953778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:29.953783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:29.953844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:29.979723 1849924 cri.go:89] found id: ""
	I1124 09:57:29.979737 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.979744 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:29.979750 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:29.979809 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:30.029207 1849924 cri.go:89] found id: ""
	I1124 09:57:30.029223 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.029231 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:30.029237 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:30.029307 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:30.086347 1849924 cri.go:89] found id: ""
	I1124 09:57:30.086364 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.086374 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:30.086381 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:30.086453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:30.117385 1849924 cri.go:89] found id: ""
	I1124 09:57:30.117412 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.117420 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:30.117429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:30.117442 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:30.134069 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:30.134089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:30.200106 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:30.200116 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:30.200131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:30.277714 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:30.277734 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:30.306530 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:30.306548 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:32.873889 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:32.884169 1849924 kubeadm.go:602] duration metric: took 4m3.946947382s to restartPrimaryControlPlane
	W1124 09:57:32.884229 1849924 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1124 09:57:32.884313 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 09:57:33.294612 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:57:33.307085 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:57:33.314867 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:57:33.314936 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:57:33.322582 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:57:33.322593 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 09:57:33.322667 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:57:33.330196 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:57:33.330260 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:57:33.337917 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:57:33.345410 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:57:33.345471 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:57:33.352741 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.360084 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:57:33.360141 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.367359 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:57:33.374680 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:57:33.374740 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:57:33.381720 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:57:33.421475 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:57:33.421672 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:57:33.492568 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:57:33.492631 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:57:33.492668 1849924 kubeadm.go:319] OS: Linux
	I1124 09:57:33.492712 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:57:33.492759 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:57:33.492805 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:57:33.492852 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:57:33.492898 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:57:33.492945 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:57:33.492989 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:57:33.493036 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:57:33.493080 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:57:33.559811 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:57:33.559935 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:57:33.560031 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:57:33.569641 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:57:33.572593 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 09:57:33.572694 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:57:33.572778 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:57:33.572897 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 09:57:33.572970 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 09:57:33.573053 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 09:57:33.573134 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 09:57:33.573209 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 09:57:33.573281 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 09:57:33.573362 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 09:57:33.573444 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 09:57:33.573489 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 09:57:33.573554 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:57:34.404229 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:57:34.574070 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:57:34.974228 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:57:35.133185 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:57:35.260833 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:57:35.261355 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:57:35.265684 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:57:35.269119 1849924 out.go:252]   - Booting up control plane ...
	I1124 09:57:35.269213 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:57:35.269289 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:57:35.269807 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:57:35.284618 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:57:35.284910 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:57:35.293324 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:57:35.293620 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:57:35.293661 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:57:35.424973 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:57:35.425087 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:01:35.425195 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000242606s
	I1124 10:01:35.425226 1849924 kubeadm.go:319] 
	I1124 10:01:35.425316 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:01:35.425374 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:01:35.425488 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:01:35.425495 1849924 kubeadm.go:319] 
	I1124 10:01:35.425617 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:01:35.425655 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:01:35.425685 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:01:35.425690 1849924 kubeadm.go:319] 
	I1124 10:01:35.429378 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:01:35.429792 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:01:35.429899 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:01:35.430134 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:01:35.430138 1849924 kubeadm.go:319] 
	I1124 10:01:35.430206 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1124 10:01:35.430308 1849924 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000242606s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1124 10:01:35.430396 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 10:01:35.837421 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:01:35.850299 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 10:01:35.850356 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:01:35.858169 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 10:01:35.858180 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 10:01:35.858230 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 10:01:35.866400 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 10:01:35.866456 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 10:01:35.873856 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 10:01:35.881958 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 10:01:35.882015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 10:01:35.889339 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.896920 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 10:01:35.896977 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.904670 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 10:01:35.912117 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 10:01:35.912171 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:01:35.919741 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 10:01:35.956259 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 10:01:35.956313 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 10:01:36.031052 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 10:01:36.031118 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 10:01:36.031152 1849924 kubeadm.go:319] OS: Linux
	I1124 10:01:36.031196 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 10:01:36.031243 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 10:01:36.031289 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 10:01:36.031336 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 10:01:36.031383 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 10:01:36.031430 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 10:01:36.031474 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 10:01:36.031521 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 10:01:36.031566 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 10:01:36.099190 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 10:01:36.099321 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 10:01:36.099441 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 10:01:36.106857 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 10:01:36.112186 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 10:01:36.112274 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 10:01:36.112337 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 10:01:36.112413 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 10:01:36.112473 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 10:01:36.112542 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 10:01:36.112594 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 10:01:36.112656 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 10:01:36.112719 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 10:01:36.112792 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 10:01:36.112863 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 10:01:36.112900 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 10:01:36.112954 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 10:01:36.197295 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 10:01:36.531352 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 10:01:36.984185 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 10:01:37.290064 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 10:01:37.558441 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 10:01:37.559017 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 10:01:37.561758 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 10:01:37.564997 1849924 out.go:252]   - Booting up control plane ...
	I1124 10:01:37.565117 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 10:01:37.565200 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 10:01:37.566811 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 10:01:37.581952 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 10:01:37.582056 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 10:01:37.589882 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 10:01:37.590273 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 10:01:37.590483 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 10:01:37.733586 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 10:01:37.733692 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:05:37.728742 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000440097s
	I1124 10:05:37.728760 1849924 kubeadm.go:319] 
	I1124 10:05:37.729148 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:05:37.729217 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:05:37.729548 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:05:37.729554 1849924 kubeadm.go:319] 
	I1124 10:05:37.729744 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:05:37.729799 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:05:37.729853 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:05:37.729860 1849924 kubeadm.go:319] 
	I1124 10:05:37.734894 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:05:37.735345 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:05:37.735452 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:05:37.735693 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:05:37.735697 1849924 kubeadm.go:319] 
	I1124 10:05:37.735773 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1124 10:05:37.735829 1849924 kubeadm.go:403] duration metric: took 12m8.833752588s to StartCluster
	I1124 10:05:37.735872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:05:37.735930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:05:37.769053 1849924 cri.go:89] found id: ""
	I1124 10:05:37.769070 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.769076 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:05:37.769083 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:05:37.769166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:05:37.796753 1849924 cri.go:89] found id: ""
	I1124 10:05:37.796767 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.796774 1849924 logs.go:284] No container was found matching "etcd"
	I1124 10:05:37.796780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:05:37.796839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:05:37.822456 1849924 cri.go:89] found id: ""
	I1124 10:05:37.822470 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.822487 1849924 logs.go:284] No container was found matching "coredns"
	I1124 10:05:37.822492 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:05:37.822556 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:05:37.847572 1849924 cri.go:89] found id: ""
	I1124 10:05:37.847587 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.847594 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:05:37.847601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:05:37.847660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:05:37.874600 1849924 cri.go:89] found id: ""
	I1124 10:05:37.874614 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.874621 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:05:37.874630 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:05:37.874694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:05:37.899198 1849924 cri.go:89] found id: ""
	I1124 10:05:37.899212 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.899220 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:05:37.899226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:05:37.899286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:05:37.927492 1849924 cri.go:89] found id: ""
	I1124 10:05:37.927506 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.927513 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 10:05:37.927521 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 10:05:37.927531 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:05:37.996934 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 10:05:37.996954 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:05:38.018248 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:05:38.018265 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:05:38.095385 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:05:38.095401 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:05:38.095411 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:05:38.170993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 10:05:38.171016 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:05:38.204954 1849924 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1124 10:05:38.205004 1849924 out.go:285] * 
	W1124 10:05:38.205075 1849924 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.205091 1849924 out.go:285] * 
	W1124 10:05:38.207567 1849924 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 10:05:38.212617 1849924 out.go:203] 
	W1124 10:05:38.216450 1849924 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.216497 1849924 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1124 10:05:38.216516 1849924 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1124 10:05:38.219595 1849924 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338143556Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338188882Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338238203Z" level=info msg="Create NRI interface"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338350467Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338358927Z" level=info msg="runtime interface created"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338371744Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338378472Z" level=info msg="runtime interface starting up..."
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338384536Z" level=info msg="starting plugins..."
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338397081Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338466046Z" level=info msg="No systemd watchdog enabled"
	Nov 24 09:53:27 functional-373432 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.563306518Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=16b574c8-5f01-4b5f-b4c1-033ff8df7e69 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.564186603Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=c16d1184-1db0-41cd-b079-b58f2a21c360 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.564711746Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=dcdd2354-d66a-4ea6-b097-17376749f631 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.56539822Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e3da7e1e-5602-4f94-87aa-f42cce3f944e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.565983081Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=5aac8310-fbf3-4ab4-abba-3add8b26d6c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.566558752Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9977cb6a-a164-4bf3-8414-583100475093 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.567059862Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5734ab5d-327c-48f0-9238-94a4932df1b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.102671605Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=652a2275-3cb5-4895-9bc9-26b562399a5a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.103518123Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=705a5d4b-cd71-4163-b52e-bdb52326e8e8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.104114725Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=26eba83c-7b31-451a-890a-d51786be660e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.104602994Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d922ec08-bcda-413d-8143-5c97b1367b6e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.10506595Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=470fbbd3-e46c-4376-b51e-18b84b192ec6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.105536807Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c84d94ac-c66d-4a84-b9e9-fb5342a05f00 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.105962946Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c833fe93-752f-447a-94cf-5fbf6c21285a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:05:39.456103   21983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:39.457137   21983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:39.458705   21983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:39.459053   21983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:39.460644   21983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:05:39 up  8:48,  0 user,  load average: 0.05, 0.18, 0.36
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 10:05:37 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:37 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Nov 24 10:05:37 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:37 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:38 functional-373432 kubelet[21854]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:38 functional-373432 kubelet[21854]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:38 functional-373432 kubelet[21854]: E1124 10:05:38.039901   21854 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:38 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:38 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:38 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Nov 24 10:05:38 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:38 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:38 functional-373432 kubelet[21898]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:38 functional-373432 kubelet[21898]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:38 functional-373432 kubelet[21898]: E1124 10:05:38.763562   21898 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:38 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:38 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:39 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Nov 24 10:05:39 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:39 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:39 functional-373432 kubelet[21988]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:39 functional-373432 kubelet[21988]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:39 functional-373432 kubelet[21988]: E1124 10:05:39.521719   21988 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:39 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:39 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (370.182819ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (737.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-373432 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-373432 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (64.342967ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-373432 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (294.410397ms)

-- stdout --
	Running

                                                
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-498341 image ls                                                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format json --alsologtostderr                                                                                        │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ image          │ functional-498341 image ls --format table --alsologtostderr                                                                                       │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ update-context │ functional-498341 update-context --alsologtostderr -v=2                                                                                           │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:37 UTC │ 24 Nov 25 09:37 UTC │
	│ delete         │ -p functional-498341                                                                                                                              │ functional-498341 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │ 24 Nov 25 09:38 UTC │
	│ start          │ -p functional-373432 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:38 UTC │                     │
	│ start          │ -p functional-373432 --alsologtostderr -v=8                                                                                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:46 UTC │                     │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add registry.k8s.io/pause:latest                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache add minikube-local-cache-test:functional-373432                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ functional-373432 cache delete minikube-local-cache-test:functional-373432                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl images                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ cache          │ functional-373432 cache reload                                                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh            │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ kubectl        │ functional-373432 kubectl -- --context functional-373432 get pods                                                                                 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ start          │ -p functional-373432 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:53:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:53:23.394373 1849924 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:53:23.394473 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394476 1849924 out.go:374] Setting ErrFile to fd 2...
	I1124 09:53:23.394480 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394868 1849924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:53:23.395314 1849924 out.go:368] Setting JSON to false
	I1124 09:53:23.396438 1849924 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30954,"bootTime":1763947050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:53:23.396523 1849924 start.go:143] virtualization:  
	I1124 09:53:23.399850 1849924 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:53:23.403618 1849924 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:53:23.403698 1849924 notify.go:221] Checking for updates...
	I1124 09:53:23.409546 1849924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:53:23.412497 1849924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:53:23.415264 1849924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:53:23.418109 1849924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:53:23.420908 1849924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:53:23.424158 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:23.424263 1849924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:53:23.449398 1849924 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:53:23.449524 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.505939 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.496540271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.506033 1849924 docker.go:319] overlay module found
	I1124 09:53:23.509224 1849924 out.go:179] * Using the docker driver based on existing profile
	I1124 09:53:23.512245 1849924 start.go:309] selected driver: docker
	I1124 09:53:23.512255 1849924 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.512340 1849924 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:53:23.512454 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.568317 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.558792888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.568738 1849924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:53:23.568763 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:23.568821 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:23.568862 1849924 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.571988 1849924 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:53:23.574929 1849924 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:53:23.577959 1849924 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:53:23.580671 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:23.580735 1849924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:53:23.600479 1849924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:53:23.600490 1849924 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:53:23.634350 1849924 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:53:24.054820 1849924 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:53:24.054990 1849924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:53:24.055122 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.055240 1849924 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:53:24.055269 1849924 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.055313 1849924 start.go:364] duration metric: took 27.192µs to acquireMachinesLock for "functional-373432"
	I1124 09:53:24.055327 1849924 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:53:24.055331 1849924 fix.go:54] fixHost starting: 
	I1124 09:53:24.055580 1849924 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:53:24.072844 1849924 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:53:24.072865 1849924 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:53:24.076050 1849924 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:53:24.076079 1849924 machine.go:94] provisionDockerMachine start ...
	I1124 09:53:24.076162 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.100870 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.101221 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.101228 1849924 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:53:24.232623 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.252893 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.252907 1849924 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:53:24.252988 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.280057 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.280362 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.280376 1849924 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:53:24.402975 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.467980 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.468079 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.499770 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.500067 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.500084 1849924 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:53:24.556663 1849924 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556759 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:53:24.556767 1849924 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.133µs
	I1124 09:53:24.556774 1849924 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:53:24.556785 1849924 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556814 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:53:24.556818 1849924 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.266µs
	I1124 09:53:24.556823 1849924 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556832 1849924 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556867 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:53:24.556871 1849924 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 39.738µs
	I1124 09:53:24.556876 1849924 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556884 1849924 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556911 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:53:24.556915 1849924 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.655µs
	I1124 09:53:24.556920 1849924 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556934 1849924 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556959 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:53:24.556963 1849924 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 35.948µs
	I1124 09:53:24.556967 1849924 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556975 1849924 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556999 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:53:24.557011 1849924 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.226µs
	I1124 09:53:24.557015 1849924 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:53:24.557023 1849924 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557048 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:53:24.557051 1849924 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 29.202µs
	I1124 09:53:24.557056 1849924 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:53:24.557065 1849924 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557089 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:53:24.557093 1849924 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.258µs
	I1124 09:53:24.557097 1849924 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:53:24.557129 1849924 cache.go:87] Successfully saved all images to host disk.
	I1124 09:53:24.653937 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:53:24.653952 1849924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:53:24.653984 1849924 ubuntu.go:190] setting up certificates
	I1124 09:53:24.653993 1849924 provision.go:84] configureAuth start
	I1124 09:53:24.654058 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:24.671316 1849924 provision.go:143] copyHostCerts
	I1124 09:53:24.671391 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:53:24.671399 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:53:24.671473 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:53:24.671573 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:53:24.671577 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:53:24.671611 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:53:24.671659 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:53:24.671662 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:53:24.671684 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:53:24.671727 1849924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:53:25.074688 1849924 provision.go:177] copyRemoteCerts
	I1124 09:53:25.074752 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker
	I1124 09:53:25.074789 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.095886 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.200905 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:53:25.221330 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:53:25.243399 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:53:25.263746 1849924 provision.go:87] duration metric: took 609.720286ms to configureAuth
	I1124 09:53:25.263762 1849924 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:53:25.263945 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:25.264045 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.283450 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:25.283754 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:25.283770 1849924 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:53:25.632249 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:53:25.632261 1849924 machine.go:97] duration metric: took 1.556176004s to provisionDockerMachine
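	The SSH step above writes the CRIO_MINIKUBE_OPTIONS file and restarts CRI-O. A minimal local sketch of the same printf | tee pattern, targeting a temp path instead of /etc/sysconfig/crio.minikube so no sudo or systemctl is needed (the destination path here is illustrative):

```shell
# Reproduce the options-file write from the SSH command above against a
# temp path; the real run targets /etc/sysconfig/crio.minikube via sudo.
dst="$(mktemp -d)/crio.minikube"
printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | tee "$dst" > /dev/null
cat "$dst"
```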
	I1124 09:53:25.632272 1849924 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:53:25.632283 1849924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:53:25.632368 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:53:25.632405 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.650974 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.756910 1849924 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:53:25.760285 1849924 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:53:25.760302 1849924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:53:25.760312 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:53:25.760370 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:53:25.760445 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:53:25.760518 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:53:25.760561 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:53:25.767953 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:25.785397 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:53:25.802531 1849924 start.go:296] duration metric: took 170.24573ms for postStartSetup
	I1124 09:53:25.802613 1849924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:53:25.802665 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.819451 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.922232 1849924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:53:25.926996 1849924 fix.go:56] duration metric: took 1.871657348s for fixHost
	I1124 09:53:25.927011 1849924 start.go:83] releasing machines lock for "functional-373432", held for 1.871691088s
	I1124 09:53:25.927085 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:25.943658 1849924 ssh_runner.go:195] Run: cat /version.json
	I1124 09:53:25.943696 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.943958 1849924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:53:25.944002 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.980808 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.985182 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:26.175736 1849924 ssh_runner.go:195] Run: systemctl --version
	I1124 09:53:26.181965 1849924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:53:26.217601 1849924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:53:26.221860 1849924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:53:26.221923 1849924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:53:26.229857 1849924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:53:26.229870 1849924 start.go:496] detecting cgroup driver to use...
	I1124 09:53:26.229899 1849924 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:53:26.229945 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:53:26.244830 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:53:26.257783 1849924 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:53:26.257835 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:53:26.273202 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:53:26.286089 1849924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:53:26.392939 1849924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:53:26.505658 1849924 docker.go:234] disabling docker service ...
	I1124 09:53:26.505717 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:53:26.520682 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:53:26.533901 1849924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:53:26.643565 1849924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:53:26.781643 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:53:26.794102 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:53:26.807594 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:26.964951 1849924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:53:26.965014 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.974189 1849924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:53:26.974248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.982757 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.991310 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.000248 1849924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:53:27.009837 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.019258 1849924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.028248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
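	The sequence of sed edits above can be replayed against a scratch copy of the CRI-O drop-in config; a sketch assuming a minimal 02-crio.conf (the file contents below are illustrative, not the image's real defaults):

```shell
# Replay minikube's CRI-O reconfiguration against a scratch file (no sudo).
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
# Same substitutions as the logged ssh_runner commands, in the same order:
# set the pause image, set the cgroup driver, then replace conmon_cgroup.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
```

	Note the delete-then-append pair: conmon_cgroup must be pinned to "pod" whenever cgroup_manager is "cgroupfs", so the old value is dropped and a fresh line is appended after the cgroup_manager line.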
	I1124 09:53:27.037276 1849924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:53:27.045218 1849924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:53:27.052631 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:27.162722 1849924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:53:27.344834 1849924 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:53:27.344893 1849924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:53:27.348791 1849924 start.go:564] Will wait 60s for crictl version
	I1124 09:53:27.348847 1849924 ssh_runner.go:195] Run: which crictl
	I1124 09:53:27.352314 1849924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:53:27.376797 1849924 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:53:27.376884 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.404280 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.437171 1849924 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:53:27.439969 1849924 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:53:27.457621 1849924 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:53:27.466585 1849924 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1124 09:53:27.469312 1849924 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:53:27.469546 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.636904 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.787069 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.940573 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:27.940635 1849924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:53:27.974420 1849924 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:53:27.974431 1849924 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:53:27.974436 1849924 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:53:27.974527 1849924 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:53:27.974612 1849924 ssh_runner.go:195] Run: crio config
	I1124 09:53:28.037679 1849924 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1124 09:53:28.037700 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:28.037709 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:28.037724 1849924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:53:28.037750 1849924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:53:28.037877 1849924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
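	For a rendered config like the one above, the apiserver override can be pulled back out with a one-liner; a sketch against an illustrative fragment of the YAML (the temp file and its contents are assumptions for demonstration, not the file minikube writes):

```shell
# Extract the value paired with a named extraArg from a kubeadm-style
# config fragment (illustrative subset of the YAML shown above).
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceAutoProvision"
EOF
# Find the name line, then print the quoted value on the following line.
plugins=$(awk -F'"' '/name: "enable-admission-plugins"/{getline; print $2}' "$cfg")
echo "$plugins"
```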
	I1124 09:53:28.037948 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:53:28.045873 1849924 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:53:28.045941 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:53:28.053444 1849924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:53:28.066325 1849924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:53:28.079790 1849924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1124 09:53:28.092701 1849924 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:53:28.096834 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:28.213078 1849924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:53:28.235943 1849924 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:53:28.235953 1849924 certs.go:195] generating shared ca certs ...
	I1124 09:53:28.235988 1849924 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:53:28.236165 1849924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:53:28.236216 1849924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:53:28.236222 1849924 certs.go:257] generating profile certs ...
	I1124 09:53:28.236320 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:53:28.236381 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:53:28.236430 1849924 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:53:28.236545 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:53:28.236581 1849924 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:53:28.236590 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:53:28.236617 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:53:28.236639 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:53:28.236676 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:53:28.236733 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:28.237452 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:53:28.267491 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:53:28.288261 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:53:28.304655 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:53:28.321607 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:53:28.339914 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:53:28.357697 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:53:28.374827 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:53:28.392170 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:53:28.410757 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:53:28.428776 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:53:28.446790 1849924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:53:28.459992 1849924 ssh_runner.go:195] Run: openssl version
	I1124 09:53:28.466084 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:53:28.474433 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478225 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478282 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.521415 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 09:53:28.529784 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:53:28.538178 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542108 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542164 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.583128 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:53:28.591113 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:53:28.599457 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603413 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603474 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.645543 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:53:28.653724 1849924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:53:28.657603 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:53:28.698734 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:53:28.739586 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:53:28.780289 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:53:28.820840 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:53:28.861343 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
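	Each openssl invocation above asks whether a certificate remains valid for another 86400 seconds (24h); -checkend exits 0 if so, 1 otherwise, which is how minikube decides whether regeneration is needed. A self-contained demonstration against a throwaway self-signed cert (file names are illustrative):

```shell
# -checkend N exits 0 if the cert is still valid N seconds from now.
dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=checkend-demo" \
  -keyout "$dir/demo.key" -out "$dir/demo.crt" -days 2 2>/dev/null
if openssl x509 -noout -in "$dir/demo.crt" -checkend 86400; then
  echo "still valid 24h from now"
fi
```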
	I1124 09:53:28.902087 1849924 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:28.902167 1849924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:53:28.902236 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.929454 1849924 cri.go:89] found id: ""
	I1124 09:53:28.929519 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:53:28.937203 1849924 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:53:28.937213 1849924 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:53:28.937261 1849924 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:53:28.944668 1849924 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:28.945209 1849924 kubeconfig.go:125] found "functional-373432" server: "https://192.168.49.2:8441"
	I1124 09:53:28.946554 1849924 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:53:28.956044 1849924 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 09:38:48.454819060 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 09:53:28.085978644 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1124 09:53:28.956053 1849924 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:53:28.956064 1849924 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:53:28.956128 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.991786 1849924 cri.go:89] found id: ""
	I1124 09:53:28.991878 1849924 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:53:29.009992 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:53:29.018335 1849924 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 24 09:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 24 09:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Nov 24 09:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 24 09:42 /etc/kubernetes/scheduler.conf
	
	I1124 09:53:29.018393 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:53:29.026350 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:53:29.034215 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.034271 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:53:29.042061 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.049959 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.050015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.057477 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:53:29.065397 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.065453 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:53:29.072838 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:53:29.080812 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:29.126682 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:30.915283 1849924 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.788534288s)
	I1124 09:53:30.915375 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.124806 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.187302 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.234732 1849924 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:53:31.234802 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:31.735292 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.235922 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.735385 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.235894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.734984 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.235509 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.735644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.235724 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.235151 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.734994 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.235505 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.734925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.235891 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.235854 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.235929 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.734921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.234991 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.235015 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.734874 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.235403 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.734996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.235058 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.735496 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.235113 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.735894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.234930 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.735636 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.234914 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.734875 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.235656 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.735578 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.235469 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.735823 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.235926 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.235524 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.735679 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.235407 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.735614 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.235868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.734868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.235806 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.735801 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.235315 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.735919 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.735842 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.235491 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.235122 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.735029 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.235002 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.735695 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.236092 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.735024 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.235917 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.735341 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.235291 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.735026 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.235183 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.735898 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.235334 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.234896 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.735246 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.235531 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.235579 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.735599 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.234953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.734946 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.235705 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.735908 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.234909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.735831 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.235563 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.735909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.234992 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.735855 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.234936 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.734993 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.235585 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.235013 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.735371 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.235016 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.735593 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.735653 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.235793 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.734939 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.235317 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.735001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.235075 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.234969 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.735715 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.234859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.735010 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.235545 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.735305 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.235127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.734989 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.235601 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.734933 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.234986 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.735250 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.235727 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.734976 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.235644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.735675 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.735127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:31.234921 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:31.235007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:31.266239 1849924 cri.go:89] found id: ""
	I1124 09:54:31.266252 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.266259 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:31.266265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:31.266323 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:31.294586 1849924 cri.go:89] found id: ""
	I1124 09:54:31.294608 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.294616 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:31.294623 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:31.294694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:31.322061 1849924 cri.go:89] found id: ""
	I1124 09:54:31.322076 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.322083 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:31.322088 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:31.322159 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:31.349139 1849924 cri.go:89] found id: ""
	I1124 09:54:31.349154 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.349161 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:31.349167 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:31.349230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:31.379824 1849924 cri.go:89] found id: ""
	I1124 09:54:31.379838 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.379845 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:31.379850 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:31.379915 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:31.407206 1849924 cri.go:89] found id: ""
	I1124 09:54:31.407220 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.407228 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:31.407233 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:31.407296 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:31.435102 1849924 cri.go:89] found id: ""
	I1124 09:54:31.435117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.435123 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:31.435132 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:31.435143 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:31.504759 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:31.504779 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:31.520567 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:31.520584 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:31.587634 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:31.587666 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:31.587680 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:31.665843 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:31.665864 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.199426 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:34.210826 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:34.210886 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:34.249730 1849924 cri.go:89] found id: ""
	I1124 09:54:34.249743 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.249769 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:34.249774 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:34.249844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:34.279157 1849924 cri.go:89] found id: ""
	I1124 09:54:34.279171 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.279178 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:34.279183 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:34.279253 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:34.305617 1849924 cri.go:89] found id: ""
	I1124 09:54:34.305631 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.305655 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:34.305661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:34.305730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:34.331221 1849924 cri.go:89] found id: ""
	I1124 09:54:34.331235 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.331243 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:34.331249 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:34.331309 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:34.357361 1849924 cri.go:89] found id: ""
	I1124 09:54:34.357374 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.357381 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:34.357387 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:34.357447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:34.382790 1849924 cri.go:89] found id: ""
	I1124 09:54:34.382805 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.382812 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:34.382817 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:34.382882 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:34.408622 1849924 cri.go:89] found id: ""
	I1124 09:54:34.408635 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.408653 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:34.408661 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:34.408673 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:34.473355 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:34.473365 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:34.473376 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:34.560903 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:34.560924 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.589722 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:34.589738 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:34.659382 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:34.659407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.175501 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:37.187020 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:37.187082 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:37.215497 1849924 cri.go:89] found id: ""
	I1124 09:54:37.215511 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.215518 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:37.215524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:37.215584 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:37.252296 1849924 cri.go:89] found id: ""
	I1124 09:54:37.252310 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.252317 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:37.252323 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:37.252383 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:37.281216 1849924 cri.go:89] found id: ""
	I1124 09:54:37.281230 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.281237 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:37.281242 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:37.281302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:37.307335 1849924 cri.go:89] found id: ""
	I1124 09:54:37.307349 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.307356 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:37.307361 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:37.307435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:37.333186 1849924 cri.go:89] found id: ""
	I1124 09:54:37.333209 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.333217 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:37.333222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:37.333290 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:37.358046 1849924 cri.go:89] found id: ""
	I1124 09:54:37.358060 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.358068 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:37.358074 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:37.358130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:37.388252 1849924 cri.go:89] found id: ""
	I1124 09:54:37.388265 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.388273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:37.388280 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:37.388291 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:37.423715 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:37.423740 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:37.490800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:37.490819 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.506370 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:37.506387 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:37.571587 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:37.571597 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:37.571608 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.152603 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:40.164138 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:40.164210 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:40.192566 1849924 cri.go:89] found id: ""
	I1124 09:54:40.192581 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.192589 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:40.192594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:40.192677 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:40.233587 1849924 cri.go:89] found id: ""
	I1124 09:54:40.233616 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.233623 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:40.233628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:40.233702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:40.268152 1849924 cri.go:89] found id: ""
	I1124 09:54:40.268166 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.268173 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:40.268178 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:40.268258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:40.297572 1849924 cri.go:89] found id: ""
	I1124 09:54:40.297586 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.297593 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:40.297605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:40.297666 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:40.328480 1849924 cri.go:89] found id: ""
	I1124 09:54:40.328502 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.328511 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:40.328517 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:40.328583 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:40.354088 1849924 cri.go:89] found id: ""
	I1124 09:54:40.354102 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.354108 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:40.354114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:40.354172 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:40.384758 1849924 cri.go:89] found id: ""
	I1124 09:54:40.384772 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.384779 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:40.384786 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:40.384797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:40.452137 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:40.452157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:40.467741 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:40.467757 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:40.535224 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:40.535235 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:40.535246 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.615981 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:40.616005 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:43.148076 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:43.158106 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:43.158169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:43.182985 1849924 cri.go:89] found id: ""
	I1124 09:54:43.182999 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.183006 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:43.183012 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:43.183068 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:43.215806 1849924 cri.go:89] found id: ""
	I1124 09:54:43.215820 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.215837 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:43.215844 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:43.215903 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:43.244278 1849924 cri.go:89] found id: ""
	I1124 09:54:43.244301 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.244309 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:43.244314 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:43.244385 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:43.272908 1849924 cri.go:89] found id: ""
	I1124 09:54:43.272931 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.272938 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:43.272949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:43.273029 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:43.297907 1849924 cri.go:89] found id: ""
	I1124 09:54:43.297921 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.297927 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:43.297933 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:43.298008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:43.330376 1849924 cri.go:89] found id: ""
	I1124 09:54:43.330391 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.330397 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:43.330403 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:43.330459 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:43.359850 1849924 cri.go:89] found id: ""
	I1124 09:54:43.359864 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.359871 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:43.359879 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:43.359898 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:43.426992 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:43.427012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:43.441799 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:43.441816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:43.504072 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:43.504082 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:43.504093 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:43.585362 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:43.585390 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.114191 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:46.124223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:46.124285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:46.151013 1849924 cri.go:89] found id: ""
	I1124 09:54:46.151027 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.151034 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:46.151039 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:46.151096 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:46.177170 1849924 cri.go:89] found id: ""
	I1124 09:54:46.177184 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.177191 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:46.177196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:46.177258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:46.205800 1849924 cri.go:89] found id: ""
	I1124 09:54:46.205814 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.205822 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:46.205828 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:46.205893 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:46.239665 1849924 cri.go:89] found id: ""
	I1124 09:54:46.239689 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.239697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:46.239702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:46.239782 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:46.274455 1849924 cri.go:89] found id: ""
	I1124 09:54:46.274480 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.274488 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:46.274494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:46.274574 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:46.300659 1849924 cri.go:89] found id: ""
	I1124 09:54:46.300673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.300680 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:46.300686 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:46.300760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:46.326694 1849924 cri.go:89] found id: ""
	I1124 09:54:46.326708 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.326715 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:46.326723 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:46.326735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:46.389430 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:46.389441 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:46.389452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:46.467187 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:46.467207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.499873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:46.499889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:46.574600 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:46.574626 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.092671 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:49.102878 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:49.102942 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:49.130409 1849924 cri.go:89] found id: ""
	I1124 09:54:49.130431 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.130439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:49.130445 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:49.130508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:49.156861 1849924 cri.go:89] found id: ""
	I1124 09:54:49.156874 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.156891 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:49.156897 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:49.156964 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:49.183346 1849924 cri.go:89] found id: ""
	I1124 09:54:49.183369 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.183376 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:49.183382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:49.183442 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:49.217035 1849924 cri.go:89] found id: ""
	I1124 09:54:49.217049 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.217056 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:49.217062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:49.217146 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:49.245694 1849924 cri.go:89] found id: ""
	I1124 09:54:49.245713 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.245720 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:49.245726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:49.245891 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:49.284969 1849924 cri.go:89] found id: ""
	I1124 09:54:49.284983 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.284990 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:49.284995 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:49.285055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:49.314521 1849924 cri.go:89] found id: ""
	I1124 09:54:49.314535 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.314542 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:49.314549 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:49.314560 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:49.398958 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:49.398979 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:49.428494 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:49.428511 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:49.497701 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:49.497725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.513336 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:49.513352 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:49.581585 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.081862 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:52.092629 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:52.092692 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:52.124453 1849924 cri.go:89] found id: ""
	I1124 09:54:52.124475 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.124482 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:52.124488 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:52.124546 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:52.151758 1849924 cri.go:89] found id: ""
	I1124 09:54:52.151771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.151778 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:52.151784 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:52.151844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:52.176757 1849924 cri.go:89] found id: ""
	I1124 09:54:52.176771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.176778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:52.176783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:52.176846 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:52.201940 1849924 cri.go:89] found id: ""
	I1124 09:54:52.201954 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.201961 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:52.201967 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:52.202025 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:52.248612 1849924 cri.go:89] found id: ""
	I1124 09:54:52.248625 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.248632 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:52.248638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:52.248713 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:52.279382 1849924 cri.go:89] found id: ""
	I1124 09:54:52.279396 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.279404 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:52.279409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:52.279471 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:52.308695 1849924 cri.go:89] found id: ""
	I1124 09:54:52.308709 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.308717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:52.308724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:52.308735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:52.376027 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:52.376050 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:52.391327 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:52.391343 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:52.459367 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.459377 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:52.459389 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:52.535870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:52.535893 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:55.066284 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:55.077139 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:55.077203 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:55.105400 1849924 cri.go:89] found id: ""
	I1124 09:54:55.105498 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.105506 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:55.105512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:55.105620 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:55.136637 1849924 cri.go:89] found id: ""
	I1124 09:54:55.136651 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.136659 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:55.136664 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:55.136729 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:55.164659 1849924 cri.go:89] found id: ""
	I1124 09:54:55.164673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.164680 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:55.164685 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:55.164749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:55.190091 1849924 cri.go:89] found id: ""
	I1124 09:54:55.190117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.190124 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:55.190129 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:55.190191 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:55.224336 1849924 cri.go:89] found id: ""
	I1124 09:54:55.224351 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.224358 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:55.224363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:55.224424 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:55.259735 1849924 cri.go:89] found id: ""
	I1124 09:54:55.259748 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.259755 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:55.259761 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:55.259821 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:55.290052 1849924 cri.go:89] found id: ""
	I1124 09:54:55.290065 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.290072 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:55.290079 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:55.290090 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:55.355938 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:55.355957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:55.371501 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:55.371518 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:55.437126 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:55.437140 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:55.437152 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:55.515834 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:55.515854 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.048421 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:58.059495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:58.059560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:58.087204 1849924 cri.go:89] found id: ""
	I1124 09:54:58.087219 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.087226 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:58.087232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:58.087292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:58.118248 1849924 cri.go:89] found id: ""
	I1124 09:54:58.118262 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.118270 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:58.118276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:58.118336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:58.144878 1849924 cri.go:89] found id: ""
	I1124 09:54:58.144892 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.144899 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:58.144905 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:58.144963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:58.171781 1849924 cri.go:89] found id: ""
	I1124 09:54:58.171795 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.171814 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:58.171820 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:58.171898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:58.200885 1849924 cri.go:89] found id: ""
	I1124 09:54:58.200907 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.200915 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:58.200920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:58.200993 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:58.231674 1849924 cri.go:89] found id: ""
	I1124 09:54:58.231688 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.231695 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:58.231718 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:58.231792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:58.266664 1849924 cri.go:89] found id: ""
	I1124 09:54:58.266679 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.266686 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:58.266694 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:58.266705 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.300806 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:58.300822 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:58.367929 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:58.367949 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:58.383950 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:58.383967 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:58.449243 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:58.449254 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:58.449279 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:01.029569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:01.040150 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:01.040231 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:01.067942 1849924 cri.go:89] found id: ""
	I1124 09:55:01.067955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.067962 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:01.067968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:01.068031 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:01.095348 1849924 cri.go:89] found id: ""
	I1124 09:55:01.095362 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.095369 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:01.095375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:01.095436 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:01.125781 1849924 cri.go:89] found id: ""
	I1124 09:55:01.125795 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.125803 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:01.125808 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:01.125871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:01.153546 1849924 cri.go:89] found id: ""
	I1124 09:55:01.153561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.153568 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:01.153575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:01.153643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:01.183965 1849924 cri.go:89] found id: ""
	I1124 09:55:01.183980 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.183987 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:01.183993 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:01.184055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:01.218518 1849924 cri.go:89] found id: ""
	I1124 09:55:01.218533 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.218541 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:01.218548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:01.218628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:01.255226 1849924 cri.go:89] found id: ""
	I1124 09:55:01.255241 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.255248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:01.255255 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:01.255266 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:01.290705 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:01.290723 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:01.362275 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:01.362296 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:01.378338 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:01.378357 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:01.447338 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:01.447348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:01.447359 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.029431 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:04.039677 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:04.039753 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:04.064938 1849924 cri.go:89] found id: ""
	I1124 09:55:04.064952 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.064968 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:04.064975 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:04.065032 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:04.091065 1849924 cri.go:89] found id: ""
	I1124 09:55:04.091079 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.091087 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:04.091092 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:04.091155 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:04.119888 1849924 cri.go:89] found id: ""
	I1124 09:55:04.119902 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.119910 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:04.119915 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:04.119990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:04.145893 1849924 cri.go:89] found id: ""
	I1124 09:55:04.145907 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.145914 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:04.145920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:04.145981 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:04.172668 1849924 cri.go:89] found id: ""
	I1124 09:55:04.172682 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.172689 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:04.172695 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:04.172770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:04.199546 1849924 cri.go:89] found id: ""
	I1124 09:55:04.199559 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.199576 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:04.199582 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:04.199654 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:04.233837 1849924 cri.go:89] found id: ""
	I1124 09:55:04.233850 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.233857 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:04.233865 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:04.233875 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:04.312846 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:04.312868 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:04.328376 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:04.328393 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:04.392893 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:04.392903 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:04.392914 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.474469 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:04.474497 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.002775 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:07.014668 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:07.014734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:07.041533 1849924 cri.go:89] found id: ""
	I1124 09:55:07.041549 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.041556 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:07.041563 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:07.041628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:07.071414 1849924 cri.go:89] found id: ""
	I1124 09:55:07.071429 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.071436 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:07.071442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:07.071500 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:07.102622 1849924 cri.go:89] found id: ""
	I1124 09:55:07.102637 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.102644 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:07.102650 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:07.102708 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:07.127684 1849924 cri.go:89] found id: ""
	I1124 09:55:07.127713 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.127720 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:07.127726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:07.127792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:07.153696 1849924 cri.go:89] found id: ""
	I1124 09:55:07.153710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.153718 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:07.153724 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:07.153785 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:07.186158 1849924 cri.go:89] found id: ""
	I1124 09:55:07.186180 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.186187 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:07.186193 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:07.186252 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:07.217520 1849924 cri.go:89] found id: ""
	I1124 09:55:07.217554 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.217562 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:07.217570 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:07.217580 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.247265 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:07.247288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:07.320517 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:07.320537 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:07.336358 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:07.336373 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:07.403281 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:07.403292 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:07.403302 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:09.981463 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:09.992128 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:09.992195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:10.021174 1849924 cri.go:89] found id: ""
	I1124 09:55:10.021189 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.021197 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:10.021203 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:10.021267 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:10.049180 1849924 cri.go:89] found id: ""
	I1124 09:55:10.049194 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.049202 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:10.049207 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:10.049270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:10.078645 1849924 cri.go:89] found id: ""
	I1124 09:55:10.078660 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.078667 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:10.078673 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:10.078734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:10.106290 1849924 cri.go:89] found id: ""
	I1124 09:55:10.106304 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.106312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:10.106318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:10.106390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:10.133401 1849924 cri.go:89] found id: ""
	I1124 09:55:10.133455 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.133462 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:10.133468 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:10.133544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:10.162805 1849924 cri.go:89] found id: ""
	I1124 09:55:10.162820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.162827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:10.162833 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:10.162890 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:10.189156 1849924 cri.go:89] found id: ""
	I1124 09:55:10.189170 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.189177 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:10.189185 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:10.189206 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:10.280238 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:10.280247 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:10.280258 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:10.359007 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:10.359031 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:10.395999 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:10.396024 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:10.462661 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:10.462683 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:12.979323 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:12.989228 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:12.989300 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:13.016908 1849924 cri.go:89] found id: ""
	I1124 09:55:13.016922 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.016929 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:13.016935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:13.016998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:13.044445 1849924 cri.go:89] found id: ""
	I1124 09:55:13.044467 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.044474 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:13.044480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:13.044547 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:13.070357 1849924 cri.go:89] found id: ""
	I1124 09:55:13.070379 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.070387 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:13.070392 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:13.070461 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:13.098253 1849924 cri.go:89] found id: ""
	I1124 09:55:13.098267 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.098274 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:13.098280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:13.098339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:13.124183 1849924 cri.go:89] found id: ""
	I1124 09:55:13.124196 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.124203 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:13.124209 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:13.124269 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:13.150521 1849924 cri.go:89] found id: ""
	I1124 09:55:13.150536 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.150543 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:13.150549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:13.150619 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:13.181696 1849924 cri.go:89] found id: ""
	I1124 09:55:13.181710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.181717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:13.181724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:13.181735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:13.250758 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:13.250778 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:13.271249 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:13.271264 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:13.332213 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:13.332223 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:13.332235 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:13.409269 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:13.409293 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:15.940893 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:15.951127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:15.951201 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:15.976744 1849924 cri.go:89] found id: ""
	I1124 09:55:15.976767 1849924 logs.go:282] 0 containers: []
	W1124 09:55:15.976774 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:15.976780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:15.976848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:16.005218 1849924 cri.go:89] found id: ""
	I1124 09:55:16.005235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.005245 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:16.005251 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:16.005336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:16.036862 1849924 cri.go:89] found id: ""
	I1124 09:55:16.036888 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.036896 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:16.036902 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:16.036990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:16.063354 1849924 cri.go:89] found id: ""
	I1124 09:55:16.063369 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.063376 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:16.063382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:16.063455 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:16.092197 1849924 cri.go:89] found id: ""
	I1124 09:55:16.092211 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.092218 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:16.092224 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:16.092286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:16.117617 1849924 cri.go:89] found id: ""
	I1124 09:55:16.117631 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.117639 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:16.117644 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:16.117702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:16.143200 1849924 cri.go:89] found id: ""
	I1124 09:55:16.143214 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.143220 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:16.143228 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:16.143239 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:16.171873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:16.171889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:16.247500 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:16.247519 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:16.267064 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:16.267080 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:16.337347 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:16.337357 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:16.337368 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:18.916700 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:18.927603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:18.927697 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:18.958633 1849924 cri.go:89] found id: ""
	I1124 09:55:18.958649 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.958656 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:18.958662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:18.958725 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:18.988567 1849924 cri.go:89] found id: ""
	I1124 09:55:18.988582 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.988589 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:18.988594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:18.988665 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:19.016972 1849924 cri.go:89] found id: ""
	I1124 09:55:19.016986 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.016993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:19.016999 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:19.017058 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:19.042806 1849924 cri.go:89] found id: ""
	I1124 09:55:19.042827 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.042835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:19.042841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:19.042905 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:19.073274 1849924 cri.go:89] found id: ""
	I1124 09:55:19.073288 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.073296 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:19.073301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:19.073368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:19.099687 1849924 cri.go:89] found id: ""
	I1124 09:55:19.099701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.099708 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:19.099714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:19.099780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:19.126512 1849924 cri.go:89] found id: ""
	I1124 09:55:19.126526 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.126532 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:19.126540 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:19.126550 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:19.194410 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:19.194430 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:19.216505 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:19.216527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:19.291566 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:19.291578 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:19.291591 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:19.371192 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:19.371213 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:21.902356 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:21.912405 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:21.912468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:21.937243 1849924 cri.go:89] found id: ""
	I1124 09:55:21.937256 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.937270 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:21.937276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:21.937335 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:21.963054 1849924 cri.go:89] found id: ""
	I1124 09:55:21.963068 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.963075 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:21.963080 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:21.963136 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:21.988695 1849924 cri.go:89] found id: ""
	I1124 09:55:21.988708 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.988715 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:21.988722 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:21.988780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:22.015029 1849924 cri.go:89] found id: ""
	I1124 09:55:22.015043 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.015050 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:22.015056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:22.015117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:22.044828 1849924 cri.go:89] found id: ""
	I1124 09:55:22.044843 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.044851 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:22.044857 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:22.044919 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:22.071875 1849924 cri.go:89] found id: ""
	I1124 09:55:22.071889 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.071897 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:22.071903 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:22.071970 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:22.099237 1849924 cri.go:89] found id: ""
	I1124 09:55:22.099252 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.099259 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:22.099267 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:22.099278 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:22.170156 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:22.170176 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:22.185271 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:22.185288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:22.271963 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:22.271973 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:22.271984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:22.349426 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:22.349447 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:24.878185 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:24.888725 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:24.888800 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:24.915846 1849924 cri.go:89] found id: ""
	I1124 09:55:24.915860 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.915867 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:24.915872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:24.915931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:24.944104 1849924 cri.go:89] found id: ""
	I1124 09:55:24.944118 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.944125 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:24.944131 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:24.944196 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:24.970424 1849924 cri.go:89] found id: ""
	I1124 09:55:24.970438 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.970445 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:24.970450 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:24.970511 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:24.999941 1849924 cri.go:89] found id: ""
	I1124 09:55:24.999955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.999962 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:24.999968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:25.000027 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:25.030682 1849924 cri.go:89] found id: ""
	I1124 09:55:25.030700 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.030707 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:25.030714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:25.030788 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:25.061169 1849924 cri.go:89] found id: ""
	I1124 09:55:25.061183 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.061191 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:25.061196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:25.061262 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:25.092046 1849924 cri.go:89] found id: ""
	I1124 09:55:25.092061 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.092069 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:25.092078 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:25.092089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:25.164204 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:25.164229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:25.180461 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:25.180477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:25.270104 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:25.270114 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:25.270125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:25.349962 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:25.349985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:27.885869 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:27.895923 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:27.895990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:27.923576 1849924 cri.go:89] found id: ""
	I1124 09:55:27.923591 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.923598 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:27.923604 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:27.923660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:27.949384 1849924 cri.go:89] found id: ""
	I1124 09:55:27.949398 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.949405 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:27.949409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:27.949468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:27.974662 1849924 cri.go:89] found id: ""
	I1124 09:55:27.974675 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.974682 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:27.974687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:27.974752 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:28.000014 1849924 cri.go:89] found id: ""
	I1124 09:55:28.000028 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.000035 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:28.000041 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:28.000113 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:28.031383 1849924 cri.go:89] found id: ""
	I1124 09:55:28.031397 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.031404 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:28.031410 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:28.031468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:28.062851 1849924 cri.go:89] found id: ""
	I1124 09:55:28.062872 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.062880 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:28.062886 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:28.062965 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:28.091592 1849924 cri.go:89] found id: ""
	I1124 09:55:28.091608 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.091623 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:28.091633 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:28.091646 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:28.125018 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:28.125035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:28.190729 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:28.190751 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:28.205665 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:28.205681 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:28.285905 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:28.285917 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:28.285927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:30.864245 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:30.876164 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:30.876248 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:30.901572 1849924 cri.go:89] found id: ""
	I1124 09:55:30.901586 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.901593 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:30.901599 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:30.901659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:30.931361 1849924 cri.go:89] found id: ""
	I1124 09:55:30.931374 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.931382 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:30.931388 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:30.931449 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:30.956087 1849924 cri.go:89] found id: ""
	I1124 09:55:30.956101 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.956108 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:30.956114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:30.956174 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:30.981912 1849924 cri.go:89] found id: ""
	I1124 09:55:30.981925 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.981933 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:30.981938 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:30.982013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:31.010764 1849924 cri.go:89] found id: ""
	I1124 09:55:31.010778 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.010804 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:31.010811 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:31.010884 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:31.037094 1849924 cri.go:89] found id: ""
	I1124 09:55:31.037140 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.037146 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:31.037153 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:31.037221 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:31.064060 1849924 cri.go:89] found id: ""
	I1124 09:55:31.064075 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.064092 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:31.064100 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:31.064111 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:31.129432 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:31.129444 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:31.129455 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:31.207603 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:31.207622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:31.246019 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:31.246035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:31.313859 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:31.313882 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:33.829785 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:33.839749 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:33.839813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:33.864810 1849924 cri.go:89] found id: ""
	I1124 09:55:33.864824 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.864831 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:33.864837 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:33.864898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:33.890309 1849924 cri.go:89] found id: ""
	I1124 09:55:33.890324 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.890331 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:33.890336 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:33.890401 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:33.922386 1849924 cri.go:89] found id: ""
	I1124 09:55:33.922399 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.922406 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:33.922412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:33.922473 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:33.947199 1849924 cri.go:89] found id: ""
	I1124 09:55:33.947213 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.947220 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:33.947226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:33.947289 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:33.972195 1849924 cri.go:89] found id: ""
	I1124 09:55:33.972209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.972216 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:33.972222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:33.972294 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:33.997877 1849924 cri.go:89] found id: ""
	I1124 09:55:33.997891 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.997898 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:33.997904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:33.997961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:34.024719 1849924 cri.go:89] found id: ""
	I1124 09:55:34.024733 1849924 logs.go:282] 0 containers: []
	W1124 09:55:34.024741 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:34.024748 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:34.024769 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:34.089874 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:34.089896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:34.104839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:34.104857 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:34.171681 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:34.171691 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:34.171702 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:34.249876 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:34.249896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:36.781512 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:36.791518 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:36.791579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:36.820485 1849924 cri.go:89] found id: ""
	I1124 09:55:36.820500 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.820508 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:36.820514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:36.820589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:36.845963 1849924 cri.go:89] found id: ""
	I1124 09:55:36.845978 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.845985 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:36.845991 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:36.846062 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:36.880558 1849924 cri.go:89] found id: ""
	I1124 09:55:36.880573 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.880580 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:36.880586 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:36.880656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:36.908730 1849924 cri.go:89] found id: ""
	I1124 09:55:36.908745 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.908752 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:36.908769 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:36.908830 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:36.936618 1849924 cri.go:89] found id: ""
	I1124 09:55:36.936634 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.936646 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:36.936662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:36.936724 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:36.961091 1849924 cri.go:89] found id: ""
	I1124 09:55:36.961134 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.961142 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:36.961148 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:36.961215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:36.986263 1849924 cri.go:89] found id: ""
	I1124 09:55:36.986278 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.986285 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:36.986293 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:36.986304 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:37.061090 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:37.061120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:37.076634 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:37.076652 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:37.144407 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:37.144417 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:37.144427 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:37.223887 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:37.223907 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:39.759307 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:39.769265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:39.769325 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:39.795092 1849924 cri.go:89] found id: ""
	I1124 09:55:39.795107 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.795114 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:39.795120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:39.795180 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:39.821381 1849924 cri.go:89] found id: ""
	I1124 09:55:39.821396 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.821403 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:39.821408 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:39.821480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:39.850195 1849924 cri.go:89] found id: ""
	I1124 09:55:39.850209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.850224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:39.850232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:39.850291 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:39.875376 1849924 cri.go:89] found id: ""
	I1124 09:55:39.875391 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.875398 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:39.875404 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:39.875466 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:39.904124 1849924 cri.go:89] found id: ""
	I1124 09:55:39.904138 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.904146 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:39.904151 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:39.904222 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:39.930807 1849924 cri.go:89] found id: ""
	I1124 09:55:39.930820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.930827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:39.930832 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:39.930889 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:39.960435 1849924 cri.go:89] found id: ""
	I1124 09:55:39.960449 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.960456 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:39.960464 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:39.960475 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:40.030261 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:40.030271 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:40.030283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:40.109590 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:40.109615 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:40.143688 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:40.143704 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:40.212394 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:40.212412 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:42.734304 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:42.744432 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:42.744494 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:42.769686 1849924 cri.go:89] found id: ""
	I1124 09:55:42.769701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.769708 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:42.769714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:42.769774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:42.794368 1849924 cri.go:89] found id: ""
	I1124 09:55:42.794381 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.794388 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:42.794394 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:42.794460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:42.819036 1849924 cri.go:89] found id: ""
	I1124 09:55:42.819051 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.819058 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:42.819067 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:42.819126 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:42.845429 1849924 cri.go:89] found id: ""
	I1124 09:55:42.845444 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.845452 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:42.845457 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:42.845516 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:42.873391 1849924 cri.go:89] found id: ""
	I1124 09:55:42.873405 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.873412 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:42.873418 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:42.873483 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:42.899532 1849924 cri.go:89] found id: ""
	I1124 09:55:42.899560 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.899567 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:42.899575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:42.899642 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:42.925159 1849924 cri.go:89] found id: ""
	I1124 09:55:42.925173 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.925180 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:42.925188 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:42.925215 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:43.003079 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:43.003104 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:43.041964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:43.041990 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:43.120202 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:43.120224 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:43.143097 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:43.143191 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:43.219616 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:45.719895 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:45.730306 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:45.730370 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:45.755318 1849924 cri.go:89] found id: ""
	I1124 09:55:45.755333 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.755341 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:45.755353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:45.755413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:45.781283 1849924 cri.go:89] found id: ""
	I1124 09:55:45.781299 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.781305 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:45.781311 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:45.781369 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:45.807468 1849924 cri.go:89] found id: ""
	I1124 09:55:45.807482 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.807489 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:45.807495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:45.807554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:45.836726 1849924 cri.go:89] found id: ""
	I1124 09:55:45.836741 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.836749 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:45.836754 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:45.836813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:45.862613 1849924 cri.go:89] found id: ""
	I1124 09:55:45.862628 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.862635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:45.862641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:45.862702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:45.894972 1849924 cri.go:89] found id: ""
	I1124 09:55:45.894987 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.894994 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:45.895000 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:45.895067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:45.922194 1849924 cri.go:89] found id: ""
	I1124 09:55:45.922209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.922217 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:45.922224 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:45.922237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:45.954912 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:45.954930 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:46.021984 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:46.022004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:46.037849 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:46.037865 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:46.101460 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:46.101473 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:46.101483 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:48.688081 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:48.698194 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:48.698260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:48.724390 1849924 cri.go:89] found id: ""
	I1124 09:55:48.724404 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.724411 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:48.724416 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:48.724480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:48.749323 1849924 cri.go:89] found id: ""
	I1124 09:55:48.749337 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.749344 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:48.749350 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:48.749406 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:48.774542 1849924 cri.go:89] found id: ""
	I1124 09:55:48.774555 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.774562 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:48.774569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:48.774635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:48.799553 1849924 cri.go:89] found id: ""
	I1124 09:55:48.799568 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.799575 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:48.799580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:48.799637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:48.824768 1849924 cri.go:89] found id: ""
	I1124 09:55:48.824782 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.824789 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:48.824794 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:48.824849 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:48.853654 1849924 cri.go:89] found id: ""
	I1124 09:55:48.853668 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.853674 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:48.853680 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:48.853738 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:48.880137 1849924 cri.go:89] found id: ""
	I1124 09:55:48.880151 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.880158 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:48.880166 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:48.880178 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:48.943985 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:48.943998 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:48.944008 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:49.021387 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:49.021407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:49.054551 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:49.054566 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:49.124670 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:49.124690 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.640001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:51.650264 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:51.650326 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:51.675421 1849924 cri.go:89] found id: ""
	I1124 09:55:51.675434 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.675442 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:51.675447 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:51.675510 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:51.703552 1849924 cri.go:89] found id: ""
	I1124 09:55:51.703566 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.703573 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:51.703578 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:51.703637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:51.731457 1849924 cri.go:89] found id: ""
	I1124 09:55:51.731470 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.731477 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:51.731483 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:51.731540 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:51.757515 1849924 cri.go:89] found id: ""
	I1124 09:55:51.757529 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.757536 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:51.757541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:51.757604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:51.787493 1849924 cri.go:89] found id: ""
	I1124 09:55:51.787507 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.787514 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:51.787520 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:51.787579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:51.813153 1849924 cri.go:89] found id: ""
	I1124 09:55:51.813166 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.813173 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:51.813179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:51.813250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:51.845222 1849924 cri.go:89] found id: ""
	I1124 09:55:51.845235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.845244 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:51.845252 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:51.845272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.860214 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:51.860236 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:51.924176 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:51.924186 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:51.924196 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:52.001608 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:52.001629 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:52.037448 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:52.037466 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.609480 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:54.620161 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:54.620223 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:54.649789 1849924 cri.go:89] found id: ""
	I1124 09:55:54.649803 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.649810 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:54.649816 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:54.649879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:54.677548 1849924 cri.go:89] found id: ""
	I1124 09:55:54.677561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.677568 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:54.677573 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:54.677635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:54.707602 1849924 cri.go:89] found id: ""
	I1124 09:55:54.707616 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.707623 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:54.707628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:54.707687 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:54.737369 1849924 cri.go:89] found id: ""
	I1124 09:55:54.737382 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.737390 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:54.737396 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:54.737460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:54.764514 1849924 cri.go:89] found id: ""
	I1124 09:55:54.764528 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.764536 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:54.764541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:54.764599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:54.789898 1849924 cri.go:89] found id: ""
	I1124 09:55:54.789912 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.789920 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:54.789925 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:54.789986 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:54.815652 1849924 cri.go:89] found id: ""
	I1124 09:55:54.815665 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.815672 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:54.815681 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:54.815691 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.882879 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:54.882901 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:54.898593 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:54.898622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:54.967134 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:54.967146 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:54.967157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:55.046870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:55.046891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.578091 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:57.588580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:57.588643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:57.617411 1849924 cri.go:89] found id: ""
	I1124 09:55:57.617425 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.617432 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:57.617437 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:57.617503 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:57.642763 1849924 cri.go:89] found id: ""
	I1124 09:55:57.642777 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.642784 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:57.642789 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:57.642848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:57.668484 1849924 cri.go:89] found id: ""
	I1124 09:55:57.668499 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.668506 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:57.668512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:57.668571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:57.694643 1849924 cri.go:89] found id: ""
	I1124 09:55:57.694657 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.694664 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:57.694670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:57.694730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:57.720049 1849924 cri.go:89] found id: ""
	I1124 09:55:57.720063 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.720070 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:57.720075 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:57.720140 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:57.748016 1849924 cri.go:89] found id: ""
	I1124 09:55:57.748029 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.748036 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:57.748044 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:57.748104 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:57.774253 1849924 cri.go:89] found id: ""
	I1124 09:55:57.774266 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.774273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:57.774281 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:57.774295 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:57.789236 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:57.789253 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:57.851207 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:57.851217 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:57.851229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:57.927927 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:57.927946 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.959058 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:57.959075 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.529440 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:00.539970 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:00.540034 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:00.566556 1849924 cri.go:89] found id: ""
	I1124 09:56:00.566570 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.566583 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:00.566589 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:00.566659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:00.596278 1849924 cri.go:89] found id: ""
	I1124 09:56:00.596291 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.596298 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:00.596304 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:00.596362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:00.623580 1849924 cri.go:89] found id: ""
	I1124 09:56:00.623593 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.623600 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:00.623605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:00.623664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:00.648991 1849924 cri.go:89] found id: ""
	I1124 09:56:00.649006 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.649012 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:00.649018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:00.649078 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:00.676614 1849924 cri.go:89] found id: ""
	I1124 09:56:00.676628 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.676635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:00.676641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:00.676706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:00.701480 1849924 cri.go:89] found id: ""
	I1124 09:56:00.701502 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.701509 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:00.701516 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:00.701575 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:00.727550 1849924 cri.go:89] found id: ""
	I1124 09:56:00.727563 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.727570 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:00.727578 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:00.727589 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:00.755964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:00.755980 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.822018 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:00.822039 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:00.837252 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:00.837268 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:00.901931 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:00.901942 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:00.901957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.481859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:03.493893 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:03.493961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:03.522628 1849924 cri.go:89] found id: ""
	I1124 09:56:03.522643 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.522650 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:03.522656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:03.522716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:03.551454 1849924 cri.go:89] found id: ""
	I1124 09:56:03.551468 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.551475 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:03.551480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:03.551539 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:03.580931 1849924 cri.go:89] found id: ""
	I1124 09:56:03.580945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.580951 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:03.580957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:03.581015 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:03.607826 1849924 cri.go:89] found id: ""
	I1124 09:56:03.607840 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.607846 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:03.607852 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:03.607923 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:03.637843 1849924 cri.go:89] found id: ""
	I1124 09:56:03.637857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.637865 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:03.637870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:03.637931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:03.665156 1849924 cri.go:89] found id: ""
	I1124 09:56:03.665170 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.665176 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:03.665182 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:03.665250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:03.690810 1849924 cri.go:89] found id: ""
	I1124 09:56:03.690824 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.690831 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:03.690839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:03.690849 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:03.755803 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:03.755813 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:03.755823 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.832793 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:03.832816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:03.860351 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:03.860367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:03.930446 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:03.930465 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.445925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:06.457385 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:06.457451 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:06.490931 1849924 cri.go:89] found id: ""
	I1124 09:56:06.490944 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.490951 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:06.490956 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:06.491013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:06.529326 1849924 cri.go:89] found id: ""
	I1124 09:56:06.529340 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.529347 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:06.529353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:06.529409 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:06.554888 1849924 cri.go:89] found id: ""
	I1124 09:56:06.554914 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.554921 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:06.554926 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:06.554984 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:06.579750 1849924 cri.go:89] found id: ""
	I1124 09:56:06.579764 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.579771 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:06.579781 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:06.579839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:06.605075 1849924 cri.go:89] found id: ""
	I1124 09:56:06.605098 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.605134 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:06.605140 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:06.605207 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:06.630281 1849924 cri.go:89] found id: ""
	I1124 09:56:06.630295 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.630302 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:06.630307 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:06.630366 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:06.655406 1849924 cri.go:89] found id: ""
	I1124 09:56:06.655427 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.655435 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:06.655442 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:06.655453 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:06.722316 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:06.722335 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.737174 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:06.737190 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:06.801018 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:06.801032 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:06.801042 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:06.882225 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:06.882254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
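Each `found id: ""` line above is minikube treating empty `crictl ps -a --quiet --name=NAME` output as "no container". A minimal sketch of that check, under the assumption that crictl prints one container ID per line (`report_missing` is a hypothetical helper name, not minikube code):

```shell
#!/bin/sh
# report_missing NAME IDS
# IDS is the captured output of: sudo crictl ps -a --quiet --name=NAME
# Empty output means nothing matched the name filter, which the log above
# records as: No container was found matching "NAME"
report_missing() {
  if [ -z "$2" ]; then
    printf 'No container was found matching "%s"\n' "$1"
  else
    printf '%s\n' "$2"
  fi
}

report_missing kube-apiserver ""   # prints: No container was found matching "kube-apiserver"
```

In this run every control-plane name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) takes the empty branch, so the warning repeats on each polling cycle.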
	I1124 09:56:09.412996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:09.423266 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:09.423332 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:09.452270 1849924 cri.go:89] found id: ""
	I1124 09:56:09.452283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.452290 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:09.452295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:09.452353 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:09.484931 1849924 cri.go:89] found id: ""
	I1124 09:56:09.484945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.484952 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:09.484957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:09.485030 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:09.526676 1849924 cri.go:89] found id: ""
	I1124 09:56:09.526689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.526696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:09.526701 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:09.526758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:09.551815 1849924 cri.go:89] found id: ""
	I1124 09:56:09.551828 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.551835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:09.551841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:09.551904 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:09.580143 1849924 cri.go:89] found id: ""
	I1124 09:56:09.580159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.580167 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:09.580173 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:09.580233 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:09.608255 1849924 cri.go:89] found id: ""
	I1124 09:56:09.608269 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.608276 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:09.608281 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:09.608338 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:09.638262 1849924 cri.go:89] found id: ""
	I1124 09:56:09.638276 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.638283 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:09.638291 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:09.638301 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:09.713707 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:09.713728 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.741202 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:09.741218 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:09.806578 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:09.806598 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:09.821839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:09.821855 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:09.888815 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:12.390494 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:12.400491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:12.400550 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:12.426496 1849924 cri.go:89] found id: ""
	I1124 09:56:12.426511 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.426517 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:12.426524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:12.426587 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:12.457770 1849924 cri.go:89] found id: ""
	I1124 09:56:12.457794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.457801 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:12.457807 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:12.457873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:12.489154 1849924 cri.go:89] found id: ""
	I1124 09:56:12.489167 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.489174 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:12.489179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:12.489250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:12.524997 1849924 cri.go:89] found id: ""
	I1124 09:56:12.525010 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.525018 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:12.525024 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:12.525090 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:12.550538 1849924 cri.go:89] found id: ""
	I1124 09:56:12.550561 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.550569 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:12.550574 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:12.550650 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:12.575990 1849924 cri.go:89] found id: ""
	I1124 09:56:12.576011 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.576018 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:12.576025 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:12.576095 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:12.602083 1849924 cri.go:89] found id: ""
	I1124 09:56:12.602097 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.602104 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:12.602112 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:12.602125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:12.667794 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:12.667814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:12.682815 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:12.682832 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:12.749256 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.749266 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:12.749276 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:12.823882 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:12.823902 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.353890 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:15.364319 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:15.364380 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:15.389759 1849924 cri.go:89] found id: ""
	I1124 09:56:15.389772 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.389786 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:15.389792 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:15.389850 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:15.414921 1849924 cri.go:89] found id: ""
	I1124 09:56:15.414936 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.414943 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:15.414948 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:15.415008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:15.444228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.444242 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.444249 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:15.444254 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:15.444314 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:15.476734 1849924 cri.go:89] found id: ""
	I1124 09:56:15.476747 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.476763 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:15.476768 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:15.476836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:15.507241 1849924 cri.go:89] found id: ""
	I1124 09:56:15.507254 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.507261 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:15.507275 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:15.507339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:15.544058 1849924 cri.go:89] found id: ""
	I1124 09:56:15.544081 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.544089 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:15.544094 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:15.544162 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:15.571228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.571241 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.571248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:15.571261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:15.571272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:15.646647 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:15.646667 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.674311 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:15.674326 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:15.739431 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:15.739451 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:15.754640 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:15.754662 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:15.821471 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.321745 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:18.331603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:18.331664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:18.357195 1849924 cri.go:89] found id: ""
	I1124 09:56:18.357215 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.357223 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:18.357229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:18.357292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:18.387513 1849924 cri.go:89] found id: ""
	I1124 09:56:18.387527 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.387534 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:18.387540 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:18.387600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:18.414561 1849924 cri.go:89] found id: ""
	I1124 09:56:18.414583 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.414590 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:18.414596 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:18.414670 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:18.441543 1849924 cri.go:89] found id: ""
	I1124 09:56:18.441557 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.441564 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:18.441569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:18.441627 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:18.481911 1849924 cri.go:89] found id: ""
	I1124 09:56:18.481924 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.481931 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:18.481937 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:18.481995 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:18.512577 1849924 cri.go:89] found id: ""
	I1124 09:56:18.512589 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.512596 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:18.512601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:18.512660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:18.542006 1849924 cri.go:89] found id: ""
	I1124 09:56:18.542021 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.542028 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:18.542035 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:18.542045 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:18.572217 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:18.572233 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:18.637845 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:18.637863 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:18.653892 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:18.653908 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:18.720870 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.720881 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:18.720891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.300479 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:21.310612 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:21.310716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:21.339787 1849924 cri.go:89] found id: ""
	I1124 09:56:21.339801 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.339808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:21.339819 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:21.339879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:21.364577 1849924 cri.go:89] found id: ""
	I1124 09:56:21.364601 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.364609 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:21.364615 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:21.364688 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:21.391798 1849924 cri.go:89] found id: ""
	I1124 09:56:21.391852 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.391859 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:21.391865 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:21.391939 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:21.417518 1849924 cri.go:89] found id: ""
	I1124 09:56:21.417532 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.417539 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:21.417545 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:21.417600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:21.443079 1849924 cri.go:89] found id: ""
	I1124 09:56:21.443092 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.443099 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:21.443104 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:21.443164 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:21.483649 1849924 cri.go:89] found id: ""
	I1124 09:56:21.483663 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.483685 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:21.483691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:21.483758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:21.513352 1849924 cri.go:89] found id: ""
	I1124 09:56:21.513367 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.513374 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:21.513383 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:21.513445 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:21.583074 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:21.583095 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:21.598415 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:21.598432 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:21.661326 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:21.661336 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:21.661348 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.742506 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:21.742527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:24.271763 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:24.281983 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:24.282044 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:24.313907 1849924 cri.go:89] found id: ""
	I1124 09:56:24.313920 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.313928 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:24.313934 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:24.314006 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:24.338982 1849924 cri.go:89] found id: ""
	I1124 09:56:24.338996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.339003 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:24.339009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:24.339067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:24.365195 1849924 cri.go:89] found id: ""
	I1124 09:56:24.365209 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.365216 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:24.365222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:24.365292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:24.390215 1849924 cri.go:89] found id: ""
	I1124 09:56:24.390228 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.390235 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:24.390241 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:24.390299 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:24.415458 1849924 cri.go:89] found id: ""
	I1124 09:56:24.415472 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.415479 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:24.415484 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:24.415544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:24.442483 1849924 cri.go:89] found id: ""
	I1124 09:56:24.442497 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.442504 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:24.442510 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:24.442571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:24.478898 1849924 cri.go:89] found id: ""
	I1124 09:56:24.478912 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.478919 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:24.478926 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:24.478936 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:24.559295 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:24.559320 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:24.575521 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:24.575538 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:24.643962 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:24.643974 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:24.643985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:24.721863 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:24.721883 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.252684 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:27.262544 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:27.262604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:27.288190 1849924 cri.go:89] found id: ""
	I1124 09:56:27.288203 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.288211 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:27.288216 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:27.288276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:27.315955 1849924 cri.go:89] found id: ""
	I1124 09:56:27.315975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.315983 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:27.315988 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:27.316050 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:27.341613 1849924 cri.go:89] found id: ""
	I1124 09:56:27.341626 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.341633 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:27.341639 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:27.341699 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:27.366677 1849924 cri.go:89] found id: ""
	I1124 09:56:27.366690 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.366697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:27.366703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:27.366768 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:27.392001 1849924 cri.go:89] found id: ""
	I1124 09:56:27.392015 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.392021 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:27.392027 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:27.392085 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:27.419410 1849924 cri.go:89] found id: ""
	I1124 09:56:27.419430 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.419436 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:27.419442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:27.419501 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:27.444780 1849924 cri.go:89] found id: ""
	I1124 09:56:27.444794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.444801 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:27.444809 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:27.444824 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.478836 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:27.478853 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:27.552795 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:27.552814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:27.567935 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:27.567988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:27.630838 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:27.630849 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:27.630859 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:30.212620 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:30.223248 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:30.223313 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:30.249863 1849924 cri.go:89] found id: ""
	I1124 09:56:30.249876 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.249883 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:30.249888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:30.249947 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:30.275941 1849924 cri.go:89] found id: ""
	I1124 09:56:30.275955 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.275974 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:30.275980 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:30.276053 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:30.300914 1849924 cri.go:89] found id: ""
	I1124 09:56:30.300928 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.300944 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:30.300950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:30.301016 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:30.325980 1849924 cri.go:89] found id: ""
	I1124 09:56:30.325994 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.326011 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:30.326018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:30.326089 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:30.352023 1849924 cri.go:89] found id: ""
	I1124 09:56:30.352038 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.352045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:30.352050 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:30.352121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:30.379711 1849924 cri.go:89] found id: ""
	I1124 09:56:30.379724 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.379731 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:30.379736 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:30.379801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:30.409210 1849924 cri.go:89] found id: ""
	I1124 09:56:30.409224 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.409232 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:30.409240 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:30.409251 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:30.437995 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:30.438012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:30.507429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:30.507448 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:30.525911 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:30.525927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:30.589196 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:30.589210 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:30.589220 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:33.172621 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:33.182671 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:33.182730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:33.211695 1849924 cri.go:89] found id: ""
	I1124 09:56:33.211709 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.211716 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:33.211721 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:33.211779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:33.237798 1849924 cri.go:89] found id: ""
	I1124 09:56:33.237811 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.237818 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:33.237824 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:33.237885 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:33.262147 1849924 cri.go:89] found id: ""
	I1124 09:56:33.262160 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.262167 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:33.262172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:33.262230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:33.286667 1849924 cri.go:89] found id: ""
	I1124 09:56:33.286681 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.286690 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:33.286696 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:33.286754 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:33.311109 1849924 cri.go:89] found id: ""
	I1124 09:56:33.311122 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.311129 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:33.311135 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:33.311198 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:33.336757 1849924 cri.go:89] found id: ""
	I1124 09:56:33.336781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.336790 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:33.336796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:33.336864 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:33.365159 1849924 cri.go:89] found id: ""
	I1124 09:56:33.365172 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.365179 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:33.365186 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:33.365197 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:33.393002 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:33.393017 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:33.457704 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:33.457724 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:33.473674 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:33.473700 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:33.547251 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:33.547261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:33.547274 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.125180 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:36.135549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:36.135611 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:36.161892 1849924 cri.go:89] found id: ""
	I1124 09:56:36.161906 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.161913 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:36.161919 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:36.161980 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:36.192254 1849924 cri.go:89] found id: ""
	I1124 09:56:36.192268 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.192275 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:36.192280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:36.192341 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:36.219675 1849924 cri.go:89] found id: ""
	I1124 09:56:36.219689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.219696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:36.219702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:36.219760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:36.249674 1849924 cri.go:89] found id: ""
	I1124 09:56:36.249688 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.249695 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:36.249700 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:36.249756 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:36.276115 1849924 cri.go:89] found id: ""
	I1124 09:56:36.276129 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.276136 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:36.276141 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:36.276199 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:36.303472 1849924 cri.go:89] found id: ""
	I1124 09:56:36.303486 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.303494 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:36.303499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:36.303558 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:36.332774 1849924 cri.go:89] found id: ""
	I1124 09:56:36.332789 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.332796 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:36.332804 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:36.332814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.410262 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:36.410282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:36.442608 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:36.442625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:36.517228 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:36.517247 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:36.532442 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:36.532459 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:36.598941 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:39.099623 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:39.110286 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:39.110347 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:39.135094 1849924 cri.go:89] found id: ""
	I1124 09:56:39.135108 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.135115 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:39.135120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:39.135184 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:39.161664 1849924 cri.go:89] found id: ""
	I1124 09:56:39.161678 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.161685 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:39.161691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:39.161749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:39.186843 1849924 cri.go:89] found id: ""
	I1124 09:56:39.186857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.186865 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:39.186870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:39.186930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:39.212864 1849924 cri.go:89] found id: ""
	I1124 09:56:39.212878 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.212889 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:39.212895 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:39.212953 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:39.243329 1849924 cri.go:89] found id: ""
	I1124 09:56:39.243343 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.243350 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:39.243356 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:39.243421 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:39.268862 1849924 cri.go:89] found id: ""
	I1124 09:56:39.268875 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.268883 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:39.268888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:39.268950 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:39.295966 1849924 cri.go:89] found id: ""
	I1124 09:56:39.295979 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.295986 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:39.295993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:39.296004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:39.327310 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:39.327325 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:39.392831 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:39.392850 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:39.407904 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:39.407920 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:39.476692 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:39.476716 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:39.476729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.055953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:42.067687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:42.067767 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:42.096948 1849924 cri.go:89] found id: ""
	I1124 09:56:42.096963 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.096971 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:42.096977 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:42.097039 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:42.128766 1849924 cri.go:89] found id: ""
	I1124 09:56:42.128781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.128789 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:42.128795 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:42.128861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:42.160266 1849924 cri.go:89] found id: ""
	I1124 09:56:42.160283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.160291 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:42.160297 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:42.160368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:42.191973 1849924 cri.go:89] found id: ""
	I1124 09:56:42.191996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.192004 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:42.192011 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:42.192081 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:42.226204 1849924 cri.go:89] found id: ""
	I1124 09:56:42.226218 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.226226 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:42.226232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:42.226316 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:42.253907 1849924 cri.go:89] found id: ""
	I1124 09:56:42.253922 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.253929 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:42.253935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:42.253998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:42.282770 1849924 cri.go:89] found id: ""
	I1124 09:56:42.282786 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.282793 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:42.282800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:42.282811 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:42.298712 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:42.298729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:42.363239 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:42.363249 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:42.363260 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.437643 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:42.437663 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:42.475221 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:42.475237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:45.048529 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:45.067334 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:45.067432 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:45.099636 1849924 cri.go:89] found id: ""
	I1124 09:56:45.099652 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.099659 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:45.099666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:45.099762 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:45.132659 1849924 cri.go:89] found id: ""
	I1124 09:56:45.132693 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.132701 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:45.132708 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:45.132792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:45.169282 1849924 cri.go:89] found id: ""
	I1124 09:56:45.169306 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.169314 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:45.169320 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:45.169398 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:45.226517 1849924 cri.go:89] found id: ""
	I1124 09:56:45.226533 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.226542 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:45.226548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:45.226626 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:45.265664 1849924 cri.go:89] found id: ""
	I1124 09:56:45.265680 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.265687 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:45.265693 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:45.265759 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:45.298503 1849924 cri.go:89] found id: ""
	I1124 09:56:45.298517 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.298525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:45.298531 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:45.298599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:45.329403 1849924 cri.go:89] found id: ""
	I1124 09:56:45.329436 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.329445 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:45.329453 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:45.329464 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:45.345344 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:45.345361 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:45.412742 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:45.412752 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:45.412763 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:45.493978 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:45.493998 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:45.531425 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:45.531441 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.098018 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:48.108764 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:48.108836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:48.134307 1849924 cri.go:89] found id: ""
	I1124 09:56:48.134321 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.134328 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:48.134333 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:48.134390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:48.159252 1849924 cri.go:89] found id: ""
	I1124 09:56:48.159266 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.159273 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:48.159279 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:48.159337 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:48.184464 1849924 cri.go:89] found id: ""
	I1124 09:56:48.184478 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.184496 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:48.184507 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:48.184589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:48.209500 1849924 cri.go:89] found id: ""
	I1124 09:56:48.209513 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.209520 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:48.209526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:48.209590 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:48.236025 1849924 cri.go:89] found id: ""
	I1124 09:56:48.236039 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.236045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:48.236051 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:48.236121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:48.262196 1849924 cri.go:89] found id: ""
	I1124 09:56:48.262210 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.262216 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:48.262222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:48.262285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:48.286684 1849924 cri.go:89] found id: ""
	I1124 09:56:48.286698 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.286705 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:48.286712 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:48.286725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.354155 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:48.354174 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:48.369606 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:48.369625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:48.436183 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:48.436193 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:48.436207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:48.516667 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:48.516688 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.047020 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:51.057412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:51.057477 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:51.087137 1849924 cri.go:89] found id: ""
	I1124 09:56:51.087159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.087167 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:51.087172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:51.087241 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:51.115003 1849924 cri.go:89] found id: ""
	I1124 09:56:51.115018 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.115025 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:51.115031 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:51.115093 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:51.144604 1849924 cri.go:89] found id: ""
	I1124 09:56:51.144622 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.144631 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:51.144638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:51.144706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:51.172310 1849924 cri.go:89] found id: ""
	I1124 09:56:51.172323 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.172338 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:51.172345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:51.172413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:51.200354 1849924 cri.go:89] found id: ""
	I1124 09:56:51.200376 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.200384 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:51.200390 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:51.200463 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:51.225889 1849924 cri.go:89] found id: ""
	I1124 09:56:51.225903 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.225911 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:51.225917 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:51.225974 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:51.250937 1849924 cri.go:89] found id: ""
	I1124 09:56:51.250950 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.250956 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:51.250972 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:51.250984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.281935 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:51.281951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:51.346955 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:51.346975 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:51.362412 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:51.362428 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:51.424513 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:51.424523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:51.424534 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.006160 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:54.017499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:54.017565 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:54.048035 1849924 cri.go:89] found id: ""
	I1124 09:56:54.048049 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.048056 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:54.048062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:54.048117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:54.075193 1849924 cri.go:89] found id: ""
	I1124 09:56:54.075207 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.075214 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:54.075220 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:54.075278 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:54.101853 1849924 cri.go:89] found id: ""
	I1124 09:56:54.101868 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.101875 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:54.101880 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:54.101938 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:54.128585 1849924 cri.go:89] found id: ""
	I1124 09:56:54.128600 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.128608 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:54.128614 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:54.128673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:54.154726 1849924 cri.go:89] found id: ""
	I1124 09:56:54.154742 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.154750 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:54.154756 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:54.154819 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:54.180936 1849924 cri.go:89] found id: ""
	I1124 09:56:54.180975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.180984 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:54.180990 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:54.181070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:54.209038 1849924 cri.go:89] found id: ""
	I1124 09:56:54.209060 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.209067 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:54.209075 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:54.209085 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:54.279263 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:54.279289 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:54.295105 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:54.295131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:54.367337 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:54.367348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:54.367360 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.442973 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:54.442995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:56.980627 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:56.990375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:56.990434 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:57.016699 1849924 cri.go:89] found id: ""
	I1124 09:56:57.016713 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.016720 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:57.016726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:57.016789 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:57.042924 1849924 cri.go:89] found id: ""
	I1124 09:56:57.042938 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.042945 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:57.042950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:57.043009 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:57.071972 1849924 cri.go:89] found id: ""
	I1124 09:56:57.071986 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.071993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:57.071998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:57.072057 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:57.097765 1849924 cri.go:89] found id: ""
	I1124 09:56:57.097780 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.097789 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:57.097796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:57.097861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:57.124764 1849924 cri.go:89] found id: ""
	I1124 09:56:57.124778 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.124796 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:57.124802 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:57.124871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:57.151558 1849924 cri.go:89] found id: ""
	I1124 09:56:57.151584 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.151591 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:57.151597 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:57.151667 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:57.178335 1849924 cri.go:89] found id: ""
	I1124 09:56:57.178348 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.178355 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:57.178372 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:57.178383 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:57.253968 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:57.253988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:57.284364 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:57.284380 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:57.349827 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:57.349847 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:57.364617 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:57.364633 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:57.425688 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:59.926489 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:59.936801 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:59.936870 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:59.961715 1849924 cri.go:89] found id: ""
	I1124 09:56:59.961728 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.961735 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:59.961741 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:59.961801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:59.990466 1849924 cri.go:89] found id: ""
	I1124 09:56:59.990480 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.990488 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:59.990494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:59.990554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:00.129137 1849924 cri.go:89] found id: ""
	I1124 09:57:00.129161 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.129169 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:00.129175 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:00.129257 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:00.211462 1849924 cri.go:89] found id: ""
	I1124 09:57:00.211478 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.211490 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:00.211506 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:00.211593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:00.274315 1849924 cri.go:89] found id: ""
	I1124 09:57:00.274338 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.274346 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:00.274363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:00.274453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:00.321199 1849924 cri.go:89] found id: ""
	I1124 09:57:00.321233 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.321241 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:00.321247 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:00.321324 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:00.372845 1849924 cri.go:89] found id: ""
	I1124 09:57:00.372861 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.372869 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:00.372878 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:00.372889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:00.444462 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:00.444485 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:00.465343 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:00.465381 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:00.553389 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:00.553402 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:00.553418 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:00.632199 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:00.632219 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:03.162773 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:03.173065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:03.173150 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:03.200418 1849924 cri.go:89] found id: ""
	I1124 09:57:03.200431 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.200439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:03.200444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:03.200502 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:03.227983 1849924 cri.go:89] found id: ""
	I1124 09:57:03.227997 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.228004 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:03.228009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:03.228070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:03.257554 1849924 cri.go:89] found id: ""
	I1124 09:57:03.257568 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.257575 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:03.257581 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:03.257639 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:03.283198 1849924 cri.go:89] found id: ""
	I1124 09:57:03.283210 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.283217 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:03.283223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:03.283280 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:03.307981 1849924 cri.go:89] found id: ""
	I1124 09:57:03.307994 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.308002 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:03.308007 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:03.308063 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:03.337021 1849924 cri.go:89] found id: ""
	I1124 09:57:03.337035 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.337042 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:03.337047 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:03.337130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:03.362116 1849924 cri.go:89] found id: ""
	I1124 09:57:03.362130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.362137 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:03.362144 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:03.362155 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:03.427932 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:03.427951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:03.442952 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:03.442968 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:03.527978 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:03.527989 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:03.528002 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:03.603993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:03.604012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.134966 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:06.147607 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:06.147673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:06.173217 1849924 cri.go:89] found id: ""
	I1124 09:57:06.173231 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.173238 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:06.173243 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:06.173302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:06.203497 1849924 cri.go:89] found id: ""
	I1124 09:57:06.203511 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.203518 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:06.203524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:06.203581 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:06.232192 1849924 cri.go:89] found id: ""
	I1124 09:57:06.232205 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.232212 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:06.232219 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:06.232276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:06.261698 1849924 cri.go:89] found id: ""
	I1124 09:57:06.261711 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.261717 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:06.261723 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:06.261779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:06.286623 1849924 cri.go:89] found id: ""
	I1124 09:57:06.286642 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.286650 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:06.286656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:06.286717 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:06.316085 1849924 cri.go:89] found id: ""
	I1124 09:57:06.316098 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.316105 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:06.316110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:06.316169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:06.344243 1849924 cri.go:89] found id: ""
	I1124 09:57:06.344257 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.344264 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:06.344273 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:06.344283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.375793 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:06.375809 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:06.441133 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:06.441160 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:06.457259 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:06.457282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:06.534017 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:06.534028 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:06.534040 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.110740 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:09.122421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:09.122484 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:09.148151 1849924 cri.go:89] found id: ""
	I1124 09:57:09.148165 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.148172 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:09.148177 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:09.148235 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:09.173265 1849924 cri.go:89] found id: ""
	I1124 09:57:09.173279 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.173288 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:09.173295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:09.173357 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:09.198363 1849924 cri.go:89] found id: ""
	I1124 09:57:09.198377 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.198384 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:09.198389 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:09.198447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:09.224567 1849924 cri.go:89] found id: ""
	I1124 09:57:09.224581 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.224588 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:09.224594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:09.224652 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:09.249182 1849924 cri.go:89] found id: ""
	I1124 09:57:09.249195 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.249205 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:09.249210 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:09.249281 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:09.274039 1849924 cri.go:89] found id: ""
	I1124 09:57:09.274053 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.274060 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:09.274065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:09.274125 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:09.299730 1849924 cri.go:89] found id: ""
	I1124 09:57:09.299744 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.299751 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:09.299758 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:09.299770 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:09.364094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:09.364105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:09.364120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.441482 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:09.441504 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:09.479944 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:09.479961 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:09.549349 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:09.549367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:12.064927 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:12.075315 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:12.075376 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:12.103644 1849924 cri.go:89] found id: ""
	I1124 09:57:12.103658 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.103665 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:12.103670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:12.103774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:12.129120 1849924 cri.go:89] found id: ""
	I1124 09:57:12.129134 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.129141 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:12.129147 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:12.129215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:12.156010 1849924 cri.go:89] found id: ""
	I1124 09:57:12.156024 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.156031 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:12.156036 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:12.156094 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:12.184275 1849924 cri.go:89] found id: ""
	I1124 09:57:12.184289 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.184296 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:12.184301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:12.184362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:12.214700 1849924 cri.go:89] found id: ""
	I1124 09:57:12.214713 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.214726 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:12.214732 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:12.214792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:12.239546 1849924 cri.go:89] found id: ""
	I1124 09:57:12.239559 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.239566 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:12.239572 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:12.239635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:12.264786 1849924 cri.go:89] found id: ""
	I1124 09:57:12.264800 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.264806 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:12.264814 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:12.264826 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:12.324457 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:12.324467 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:12.324477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:12.401396 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:12.401417 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:12.432520 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:12.432535 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:12.502857 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:12.502877 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.018809 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:15.038661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:15.038741 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:15.069028 1849924 cri.go:89] found id: ""
	I1124 09:57:15.069043 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.069050 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:15.069056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:15.069139 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:15.096495 1849924 cri.go:89] found id: ""
	I1124 09:57:15.096513 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.096521 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:15.096526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:15.096593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:15.125417 1849924 cri.go:89] found id: ""
	I1124 09:57:15.125430 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.125438 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:15.125444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:15.125508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:15.152259 1849924 cri.go:89] found id: ""
	I1124 09:57:15.152274 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.152281 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:15.152287 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:15.152348 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:15.178920 1849924 cri.go:89] found id: ""
	I1124 09:57:15.178934 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.178942 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:15.178947 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:15.179024 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:15.207630 1849924 cri.go:89] found id: ""
	I1124 09:57:15.207643 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.207650 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:15.207656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:15.207715 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:15.237971 1849924 cri.go:89] found id: ""
	I1124 09:57:15.237985 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.237992 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:15.238000 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:15.238011 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:15.305169 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:15.305187 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.320240 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:15.320257 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:15.393546 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:15.393556 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:15.393592 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:15.470159 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:15.470179 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:18.001255 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:18.013421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:18.013488 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:18.040787 1849924 cri.go:89] found id: ""
	I1124 09:57:18.040801 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.040808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:18.040814 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:18.040873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:18.066460 1849924 cri.go:89] found id: ""
	I1124 09:57:18.066475 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.066482 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:18.066487 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:18.066544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:18.093970 1849924 cri.go:89] found id: ""
	I1124 09:57:18.093983 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.093990 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:18.093998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:18.094070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:18.119292 1849924 cri.go:89] found id: ""
	I1124 09:57:18.119306 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.119312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:18.119318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:18.119375 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:18.144343 1849924 cri.go:89] found id: ""
	I1124 09:57:18.144356 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.144363 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:18.144369 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:18.144428 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:18.176349 1849924 cri.go:89] found id: ""
	I1124 09:57:18.176362 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.176369 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:18.176375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:18.176435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:18.200900 1849924 cri.go:89] found id: ""
	I1124 09:57:18.200913 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.200920 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:18.200927 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:18.200938 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:18.266434 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:18.266452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:18.281611 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:18.281627 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:18.347510 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:18.347523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:18.347536 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:18.435234 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:18.435254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:20.973569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:20.984347 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:20.984418 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:21.011115 1849924 cri.go:89] found id: ""
	I1124 09:57:21.011130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.011137 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:21.011142 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:21.011204 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:21.041877 1849924 cri.go:89] found id: ""
	I1124 09:57:21.041891 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.041899 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:21.041904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:21.041963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:21.067204 1849924 cri.go:89] found id: ""
	I1124 09:57:21.067217 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.067224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:21.067229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:21.067288 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:21.096444 1849924 cri.go:89] found id: ""
	I1124 09:57:21.096458 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.096464 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:21.096470 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:21.096526 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:21.122011 1849924 cri.go:89] found id: ""
	I1124 09:57:21.122025 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.122033 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:21.122038 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:21.122098 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:21.150504 1849924 cri.go:89] found id: ""
	I1124 09:57:21.150518 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.150525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:21.150530 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:21.150601 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:21.179560 1849924 cri.go:89] found id: ""
	I1124 09:57:21.179573 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.179579 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:21.179587 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:21.179597 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:21.263112 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:21.263134 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:21.291875 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:21.291891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:21.358120 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:21.358139 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:21.373381 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:21.373401 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:21.437277 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:23.938404 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:23.948703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:23.948770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:23.975638 1849924 cri.go:89] found id: ""
	I1124 09:57:23.975653 1849924 logs.go:282] 0 containers: []
	W1124 09:57:23.975660 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:23.975666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:23.975797 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:24.003099 1849924 cri.go:89] found id: ""
	I1124 09:57:24.003114 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.003122 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:24.003127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:24.003195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:24.031320 1849924 cri.go:89] found id: ""
	I1124 09:57:24.031333 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.031340 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:24.031345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:24.031412 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:24.057464 1849924 cri.go:89] found id: ""
	I1124 09:57:24.057479 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.057486 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:24.057491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:24.057560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:24.083571 1849924 cri.go:89] found id: ""
	I1124 09:57:24.083586 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.083593 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:24.083598 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:24.083656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:24.109710 1849924 cri.go:89] found id: ""
	I1124 09:57:24.109724 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.109732 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:24.109737 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:24.109810 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:24.134957 1849924 cri.go:89] found id: ""
	I1124 09:57:24.134971 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.134978 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:24.134985 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:24.134995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:24.206698 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:24.206725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:24.221977 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:24.221995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:24.287450 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:24.287461 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:24.287474 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:24.364870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:24.364890 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:26.899825 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:26.911192 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:26.911260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:26.937341 1849924 cri.go:89] found id: ""
	I1124 09:57:26.937355 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.937361 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:26.937367 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:26.937429 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:26.966037 1849924 cri.go:89] found id: ""
	I1124 09:57:26.966050 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.966057 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:26.966062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:26.966119 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:26.994487 1849924 cri.go:89] found id: ""
	I1124 09:57:26.994501 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.994508 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:26.994514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:26.994572 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:27.024331 1849924 cri.go:89] found id: ""
	I1124 09:57:27.024345 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.024351 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:27.024357 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:27.024414 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:27.051922 1849924 cri.go:89] found id: ""
	I1124 09:57:27.051936 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.051943 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:27.051949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:27.052007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:27.079084 1849924 cri.go:89] found id: ""
	I1124 09:57:27.079097 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.079104 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:27.079110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:27.079166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:27.105333 1849924 cri.go:89] found id: ""
	I1124 09:57:27.105346 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.105362 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:27.105371 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:27.105399 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:27.136135 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:27.136151 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:27.202777 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:27.202797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:27.218147 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:27.218169 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:27.287094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:27.287105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:27.287116 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:29.863883 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:29.874162 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:29.874270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:29.899809 1849924 cri.go:89] found id: ""
	I1124 09:57:29.899825 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.899833 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:29.899839 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:29.899897 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:29.925268 1849924 cri.go:89] found id: ""
	I1124 09:57:29.925282 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.925289 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:29.925295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:29.925355 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:29.953756 1849924 cri.go:89] found id: ""
	I1124 09:57:29.953770 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.953778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:29.953783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:29.953844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:29.979723 1849924 cri.go:89] found id: ""
	I1124 09:57:29.979737 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.979744 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:29.979750 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:29.979809 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:30.029207 1849924 cri.go:89] found id: ""
	I1124 09:57:30.029223 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.029231 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:30.029237 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:30.029307 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:30.086347 1849924 cri.go:89] found id: ""
	I1124 09:57:30.086364 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.086374 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:30.086381 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:30.086453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:30.117385 1849924 cri.go:89] found id: ""
	I1124 09:57:30.117412 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.117420 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:30.117429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:30.117442 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:30.134069 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:30.134089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:30.200106 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:30.200116 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:30.200131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:30.277714 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:30.277734 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:30.306530 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:30.306548 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:32.873889 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:32.884169 1849924 kubeadm.go:602] duration metric: took 4m3.946947382s to restartPrimaryControlPlane
	W1124 09:57:32.884229 1849924 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1124 09:57:32.884313 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 09:57:33.294612 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:57:33.307085 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:57:33.314867 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:57:33.314936 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:57:33.322582 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:57:33.322593 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 09:57:33.322667 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:57:33.330196 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:57:33.330260 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:57:33.337917 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:57:33.345410 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:57:33.345471 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:57:33.352741 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.360084 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:57:33.360141 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.367359 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:57:33.374680 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:57:33.374740 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:57:33.381720 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:57:33.421475 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:57:33.421672 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:57:33.492568 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:57:33.492631 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:57:33.492668 1849924 kubeadm.go:319] OS: Linux
	I1124 09:57:33.492712 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:57:33.492759 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:57:33.492805 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:57:33.492852 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:57:33.492898 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:57:33.492945 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:57:33.492989 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:57:33.493036 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:57:33.493080 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:57:33.559811 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:57:33.559935 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:57:33.560031 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:57:33.569641 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:57:33.572593 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 09:57:33.572694 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:57:33.572778 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:57:33.572897 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 09:57:33.572970 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 09:57:33.573053 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 09:57:33.573134 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 09:57:33.573209 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 09:57:33.573281 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 09:57:33.573362 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 09:57:33.573444 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 09:57:33.573489 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 09:57:33.573554 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:57:34.404229 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:57:34.574070 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:57:34.974228 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:57:35.133185 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:57:35.260833 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:57:35.261355 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:57:35.265684 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:57:35.269119 1849924 out.go:252]   - Booting up control plane ...
	I1124 09:57:35.269213 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:57:35.269289 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:57:35.269807 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:57:35.284618 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:57:35.284910 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:57:35.293324 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:57:35.293620 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:57:35.293661 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:57:35.424973 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:57:35.425087 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:01:35.425195 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000242606s
	I1124 10:01:35.425226 1849924 kubeadm.go:319] 
	I1124 10:01:35.425316 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:01:35.425374 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:01:35.425488 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:01:35.425495 1849924 kubeadm.go:319] 
	I1124 10:01:35.425617 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:01:35.425655 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:01:35.425685 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:01:35.425690 1849924 kubeadm.go:319] 
	I1124 10:01:35.429378 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:01:35.429792 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:01:35.429899 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:01:35.430134 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:01:35.430138 1849924 kubeadm.go:319] 
	I1124 10:01:35.430206 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1124 10:01:35.430308 1849924 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000242606s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1124 10:01:35.430396 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 10:01:35.837421 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:01:35.850299 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 10:01:35.850356 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:01:35.858169 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 10:01:35.858180 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 10:01:35.858230 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 10:01:35.866400 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 10:01:35.866456 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 10:01:35.873856 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 10:01:35.881958 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 10:01:35.882015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 10:01:35.889339 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.896920 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 10:01:35.896977 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.904670 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 10:01:35.912117 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 10:01:35.912171 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:01:35.919741 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 10:01:35.956259 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 10:01:35.956313 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 10:01:36.031052 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 10:01:36.031118 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 10:01:36.031152 1849924 kubeadm.go:319] OS: Linux
	I1124 10:01:36.031196 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 10:01:36.031243 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 10:01:36.031289 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 10:01:36.031336 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 10:01:36.031383 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 10:01:36.031430 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 10:01:36.031474 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 10:01:36.031521 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 10:01:36.031566 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 10:01:36.099190 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 10:01:36.099321 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 10:01:36.099441 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 10:01:36.106857 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 10:01:36.112186 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 10:01:36.112274 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 10:01:36.112337 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 10:01:36.112413 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 10:01:36.112473 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 10:01:36.112542 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 10:01:36.112594 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 10:01:36.112656 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 10:01:36.112719 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 10:01:36.112792 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 10:01:36.112863 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 10:01:36.112900 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 10:01:36.112954 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 10:01:36.197295 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 10:01:36.531352 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 10:01:36.984185 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 10:01:37.290064 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 10:01:37.558441 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 10:01:37.559017 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 10:01:37.561758 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 10:01:37.564997 1849924 out.go:252]   - Booting up control plane ...
	I1124 10:01:37.565117 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 10:01:37.565200 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 10:01:37.566811 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 10:01:37.581952 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 10:01:37.582056 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 10:01:37.589882 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 10:01:37.590273 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 10:01:37.590483 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 10:01:37.733586 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 10:01:37.733692 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:05:37.728742 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000440097s
	I1124 10:05:37.728760 1849924 kubeadm.go:319] 
	I1124 10:05:37.729148 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:05:37.729217 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:05:37.729548 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:05:37.729554 1849924 kubeadm.go:319] 
	I1124 10:05:37.729744 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:05:37.729799 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:05:37.729853 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:05:37.729860 1849924 kubeadm.go:319] 
	I1124 10:05:37.734894 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:05:37.735345 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:05:37.735452 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:05:37.735693 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:05:37.735697 1849924 kubeadm.go:319] 
	I1124 10:05:37.735773 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1124 10:05:37.735829 1849924 kubeadm.go:403] duration metric: took 12m8.833752588s to StartCluster
	I1124 10:05:37.735872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:05:37.735930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:05:37.769053 1849924 cri.go:89] found id: ""
	I1124 10:05:37.769070 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.769076 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:05:37.769083 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:05:37.769166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:05:37.796753 1849924 cri.go:89] found id: ""
	I1124 10:05:37.796767 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.796774 1849924 logs.go:284] No container was found matching "etcd"
	I1124 10:05:37.796780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:05:37.796839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:05:37.822456 1849924 cri.go:89] found id: ""
	I1124 10:05:37.822470 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.822487 1849924 logs.go:284] No container was found matching "coredns"
	I1124 10:05:37.822492 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:05:37.822556 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:05:37.847572 1849924 cri.go:89] found id: ""
	I1124 10:05:37.847587 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.847594 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:05:37.847601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:05:37.847660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:05:37.874600 1849924 cri.go:89] found id: ""
	I1124 10:05:37.874614 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.874621 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:05:37.874630 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:05:37.874694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:05:37.899198 1849924 cri.go:89] found id: ""
	I1124 10:05:37.899212 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.899220 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:05:37.899226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:05:37.899286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:05:37.927492 1849924 cri.go:89] found id: ""
	I1124 10:05:37.927506 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.927513 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 10:05:37.927521 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 10:05:37.927531 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:05:37.996934 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 10:05:37.996954 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:05:38.018248 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:05:38.018265 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:05:38.095385 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:05:38.095401 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:05:38.095411 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:05:38.170993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 10:05:38.171016 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:05:38.204954 1849924 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1124 10:05:38.205004 1849924 out.go:285] * 
	W1124 10:05:38.205075 1849924 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.205091 1849924 out.go:285] * 
	W1124 10:05:38.207567 1849924 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 10:05:38.212617 1849924 out.go:203] 
	W1124 10:05:38.216450 1849924 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.216497 1849924 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1124 10:05:38.216516 1849924 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1124 10:05:38.219595 1849924 out.go:203] 
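	The stderr warnings above name the actual gate: kubelet v1.35 refuses to start on a cgroup v1 host unless the kubelet configuration explicitly opts in. A minimal sketch of that opt-in (field name taken from the warning text and KEP-5573; not something this test run applied):

```yaml
# Sketch: explicit cgroup v1 opt-in for kubelet v1.35+, per the
# SystemVerification warning above. This would be merged into the
# kubelet config file (e.g. /var/lib/kubelet/config.yaml on this node).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

	Note the warning also says the preflight validation itself must be skipped explicitly; the config field alone is not sufficient.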
	
	
	==> CRI-O <==
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338143556Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338188882Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338238203Z" level=info msg="Create NRI interface"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338350467Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338358927Z" level=info msg="runtime interface created"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338371744Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338378472Z" level=info msg="runtime interface starting up..."
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338384536Z" level=info msg="starting plugins..."
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338397081Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338466046Z" level=info msg="No systemd watchdog enabled"
	Nov 24 09:53:27 functional-373432 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.563306518Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=16b574c8-5f01-4b5f-b4c1-033ff8df7e69 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.564186603Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=c16d1184-1db0-41cd-b079-b58f2a21c360 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.564711746Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=dcdd2354-d66a-4ea6-b097-17376749f631 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.56539822Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e3da7e1e-5602-4f94-87aa-f42cce3f944e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.565983081Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=5aac8310-fbf3-4ab4-abba-3add8b26d6c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.566558752Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9977cb6a-a164-4bf3-8414-583100475093 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.567059862Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5734ab5d-327c-48f0-9238-94a4932df1b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.102671605Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=652a2275-3cb5-4895-9bc9-26b562399a5a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.103518123Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=705a5d4b-cd71-4163-b52e-bdb52326e8e8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.104114725Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=26eba83c-7b31-451a-890a-d51786be660e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.104602994Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d922ec08-bcda-413d-8143-5c97b1367b6e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.10506595Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=470fbbd3-e46c-4376-b51e-18b84b192ec6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.105536807Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c84d94ac-c66d-4a84-b9e9-fb5342a05f00 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.105962946Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c833fe93-752f-447a-94cf-5fbf6c21285a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:05:41.822335   22128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:41.823012   22128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:41.824554   22128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:41.825036   22128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:41.826691   22128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:05:41 up  8:48,  0 user,  load average: 0.04, 0.17, 0.36
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 10:05:39 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:40 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Nov 24 10:05:40 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:40 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:40 functional-373432 kubelet[22002]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:40 functional-373432 kubelet[22002]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:40 functional-373432 kubelet[22002]: E1124 10:05:40.283384   22002 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:40 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:40 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:40 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Nov 24 10:05:40 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:40 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:41 functional-373432 kubelet[22038]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:41 functional-373432 kubelet[22038]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:41 functional-373432 kubelet[22038]: E1124 10:05:41.013541   22038 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:41 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:41 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:41 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Nov 24 10:05:41 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:41 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:41 functional-373432 kubelet[22115]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:41 functional-373432 kubelet[22115]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:41 functional-373432 kubelet[22115]: E1124 10:05:41.775719   22115 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:41 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:41 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (379.712114ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)
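The kubelet crash loop above (restart counter at 966, "kubelet is configured to not run on a host using cgroup v1") traces back to the host (kernel 5.15.0-1084-aws) still mounting a legacy cgroup v1 hierarchy. A small shell sketch (editorial, not part of the test harness) for classifying what `stat -fc %T /sys/fs/cgroup` reports on such a host:

```shell
# Classify the filesystem type mounted at /sys/fs/cgroup.
# "cgroup2fs" = unified cgroup v2; "tmpfs" = the legacy cgroup v1
# layout, which is what kubelet v1.35.0-beta.0 rejects in this log.
cgroup_mode() {
  case "$1" in
    cgroup2fs) echo "v2" ;;
    tmpfs)     echo "v1" ;;
    *)         echo "unknown" ;;
  esac
}

# On a live host you would feed it the real value:
#   cgroup_mode "$(stat -fc %T /sys/fs/cgroup)"
cgroup_mode tmpfs   # prints "v1"
```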

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-373432 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-373432 apply -f testdata/invalidsvc.yaml: exit status 1 (75.08434ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-373432 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-373432 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-373432 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-373432 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-373432 --alsologtostderr -v=1] stderr:
I1124 10:08:13.645161 1868845 out.go:360] Setting OutFile to fd 1 ...
I1124 10:08:13.645318 1868845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:13.645331 1868845 out.go:374] Setting ErrFile to fd 2...
I1124 10:08:13.645337 1868845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:13.645595 1868845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 10:08:13.645852 1868845 mustload.go:66] Loading cluster: functional-373432
I1124 10:08:13.646285 1868845 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:13.646775 1868845 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
I1124 10:08:13.662655 1868845 host.go:66] Checking if "functional-373432" exists ...
I1124 10:08:13.662996 1868845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1124 10:08:13.710338 1868845 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:08:13.701416117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1124 10:08:13.710465 1868845 api_server.go:166] Checking apiserver status ...
I1124 10:08:13.710530 1868845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1124 10:08:13.710590 1868845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
I1124 10:08:13.726847 1868845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
W1124 10:08:13.830457 1868845 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1124 10:08:13.833676 1868845 out.go:179] * The control-plane node functional-373432 apiserver is not running: (state=Stopped)
I1124 10:08:13.836614 1868845 out.go:179]   To start a cluster, run: "minikube start -p functional-373432"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (330.389386ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-373432 service --namespace=default --https --url hello-node                                                                              │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ service   │ functional-373432 service hello-node --url --format={{.IP}}                                                                                         │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ service   │ functional-373432 service hello-node --url                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount     │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001:/mount-9p --alsologtostderr -v=1              │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh       │ functional-373432 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh       │ functional-373432 ssh -- ls -la /mount-9p                                                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh       │ functional-373432 ssh cat /mount-9p/test-1763978885329930944                                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh       │ functional-373432 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh       │ functional-373432 ssh sudo umount -f /mount-9p                                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ mount     │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2629282086/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh       │ functional-373432 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh       │ functional-373432 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh       │ functional-373432 ssh -- ls -la /mount-9p                                                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh       │ functional-373432 ssh sudo umount -f /mount-9p                                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount     │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount1 --alsologtostderr -v=1                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount     │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount2 --alsologtostderr -v=1                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount     │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount3 --alsologtostderr -v=1                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh       │ functional-373432 ssh findmnt -T /mount1                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh       │ functional-373432 ssh findmnt -T /mount2                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh       │ functional-373432 ssh findmnt -T /mount3                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ mount     │ -p functional-373432 --kill=true                                                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ start     │ -p functional-373432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ start     │ -p functional-373432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ start     │ -p functional-373432 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-373432 --alsologtostderr -v=1                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 10:08:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 10:08:13.424963 1868773 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:08:13.425165 1868773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:08:13.425197 1868773 out.go:374] Setting ErrFile to fd 2...
	I1124 10:08:13.425218 1868773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:08:13.425512 1868773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:08:13.425913 1868773 out.go:368] Setting JSON to false
	I1124 10:08:13.426791 1868773 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31844,"bootTime":1763947050,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 10:08:13.426892 1868773 start.go:143] virtualization:  
	I1124 10:08:13.430180 1868773 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 10:08:13.433949 1868773 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 10:08:13.434016 1868773 notify.go:221] Checking for updates...
	I1124 10:08:13.440162 1868773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 10:08:13.442942 1868773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:08:13.445792 1868773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 10:08:13.448556 1868773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 10:08:13.451408 1868773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 10:08:13.454901 1868773 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 10:08:13.455498 1868773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 10:08:13.483789 1868773 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 10:08:13.483893 1868773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:08:13.534092 1868773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:08:13.524576717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:08:13.534192 1868773 docker.go:319] overlay module found
	I1124 10:08:13.537292 1868773 out.go:179] * Using the docker driver based on existing profile
	I1124 10:08:13.540209 1868773 start.go:309] selected driver: docker
	I1124 10:08:13.540227 1868773 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:08:13.540322 1868773 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 10:08:13.540435 1868773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:08:13.594036 1868773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:08:13.585629488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:08:13.594447 1868773 cni.go:84] Creating CNI manager for ""
	I1124 10:08:13.594516 1868773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:08:13.594572 1868773 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:08:13.597775 1868773 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.571892719Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=aef09199-0d9c-4fcd-a86e-4644b84003d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601271581Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601433691Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.60148682Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.673335433Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=df47687b-4b6a-4acb-8d1e-f46521441883 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.702968936Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.703135847Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.703183371Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.732961212Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.733138962Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.7331819Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.260656424Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=d07a4d73-f74e-45cd-9c4d-fd518a9e69a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.301046166Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.301216031Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.3012543Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.339616029Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.33997436Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.340022221Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.333274376Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=0d853bf6-0cff-41f5-a62e-2b21fedcbf72 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366217435Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366382032Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366430164Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.391919753Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.392065551Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.392106118Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:08:14.904896   24912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:08:14.905687   24912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:08:14.907222   24912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:08:14.907755   24912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:08:14.909498   24912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:08:14 up  8:50,  0 user,  load average: 1.17, 0.51, 0.46
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 10:08:12 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:08:13 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1168.
	Nov 24 10:08:13 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:13 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:13 functional-373432 kubelet[24792]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:13 functional-373432 kubelet[24792]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:13 functional-373432 kubelet[24792]: E1124 10:08:13.272092   24792 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:08:13 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:08:13 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:08:13 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1169.
	Nov 24 10:08:13 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:13 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:14 functional-373432 kubelet[24806]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:14 functional-373432 kubelet[24806]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:14 functional-373432 kubelet[24806]: E1124 10:08:14.037722   24806 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:08:14 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:08:14 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:08:14 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1170.
	Nov 24 10:08:14 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:14 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:14 functional-373432 kubelet[24873]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:14 functional-373432 kubelet[24873]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:14 functional-373432 kubelet[24873]: E1124 10:08:14.766531   24873 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:08:14 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:08:14 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (335.676548ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 status: exit status 2 (351.476181ms)

-- stdout --
	functional-373432
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-373432 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (301.196582ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-373432 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 status -o json: exit status 2 (306.628488ms)

-- stdout --
	{"Name":"functional-373432","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-373432 status -o json" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (326.565276ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-373432 addons list                                                                                                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:07 UTC │ 24 Nov 25 10:07 UTC │
	│ addons  │ functional-373432 addons list -o json                                                                                                               │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:07 UTC │ 24 Nov 25 10:07 UTC │
	│ service │ functional-373432 service list                                                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ service │ functional-373432 service list -o json                                                                                                              │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ service │ functional-373432 service --namespace=default --https --url hello-node                                                                              │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ service │ functional-373432 service hello-node --url --format={{.IP}}                                                                                         │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ service │ functional-373432 service hello-node --url                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount   │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001:/mount-9p --alsologtostderr -v=1              │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh     │ functional-373432 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh     │ functional-373432 ssh -- ls -la /mount-9p                                                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh     │ functional-373432 ssh cat /mount-9p/test-1763978885329930944                                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh     │ functional-373432 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh     │ functional-373432 ssh sudo umount -f /mount-9p                                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ mount   │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2629282086/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh     │ functional-373432 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh     │ functional-373432 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh     │ functional-373432 ssh -- ls -la /mount-9p                                                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh     │ functional-373432 ssh sudo umount -f /mount-9p                                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount   │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount1 --alsologtostderr -v=1                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount   │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount2 --alsologtostderr -v=1                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount   │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount3 --alsologtostderr -v=1                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh     │ functional-373432 ssh findmnt -T /mount1                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh     │ functional-373432 ssh findmnt -T /mount2                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh     │ functional-373432 ssh findmnt -T /mount3                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ mount   │ -p functional-373432 --kill=true                                                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:53:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:53:23.394373 1849924 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:53:23.394473 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394476 1849924 out.go:374] Setting ErrFile to fd 2...
	I1124 09:53:23.394480 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394868 1849924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:53:23.395314 1849924 out.go:368] Setting JSON to false
	I1124 09:53:23.396438 1849924 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30954,"bootTime":1763947050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:53:23.396523 1849924 start.go:143] virtualization:  
	I1124 09:53:23.399850 1849924 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:53:23.403618 1849924 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:53:23.403698 1849924 notify.go:221] Checking for updates...
	I1124 09:53:23.409546 1849924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:53:23.412497 1849924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:53:23.415264 1849924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:53:23.418109 1849924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:53:23.420908 1849924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:53:23.424158 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:23.424263 1849924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:53:23.449398 1849924 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:53:23.449524 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.505939 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.496540271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.506033 1849924 docker.go:319] overlay module found
	I1124 09:53:23.509224 1849924 out.go:179] * Using the docker driver based on existing profile
	I1124 09:53:23.512245 1849924 start.go:309] selected driver: docker
	I1124 09:53:23.512255 1849924 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.512340 1849924 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:53:23.512454 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.568317 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.558792888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.568738 1849924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:53:23.568763 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:23.568821 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:23.568862 1849924 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.571988 1849924 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:53:23.574929 1849924 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:53:23.577959 1849924 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:53:23.580671 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:23.580735 1849924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:53:23.600479 1849924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:53:23.600490 1849924 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:53:23.634350 1849924 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:53:24.054820 1849924 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:53:24.054990 1849924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:53:24.055122 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.055240 1849924 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:53:24.055269 1849924 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.055313 1849924 start.go:364] duration metric: took 27.192µs to acquireMachinesLock for "functional-373432"
	I1124 09:53:24.055327 1849924 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:53:24.055331 1849924 fix.go:54] fixHost starting: 
	I1124 09:53:24.055580 1849924 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:53:24.072844 1849924 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:53:24.072865 1849924 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:53:24.076050 1849924 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:53:24.076079 1849924 machine.go:94] provisionDockerMachine start ...
	I1124 09:53:24.076162 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.100870 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.101221 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.101228 1849924 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:53:24.232623 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.252893 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.252907 1849924 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:53:24.252988 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.280057 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.280362 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.280376 1849924 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:53:24.402975 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.467980 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.468079 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.499770 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.500067 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.500084 1849924 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:53:24.556663 1849924 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556759 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:53:24.556767 1849924 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.133µs
	I1124 09:53:24.556774 1849924 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:53:24.556785 1849924 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556814 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:53:24.556818 1849924 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.266µs
	I1124 09:53:24.556823 1849924 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556832 1849924 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556867 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:53:24.556871 1849924 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 39.738µs
	I1124 09:53:24.556876 1849924 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556884 1849924 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556911 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:53:24.556915 1849924 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.655µs
	I1124 09:53:24.556920 1849924 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556934 1849924 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556959 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:53:24.556963 1849924 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 35.948µs
	I1124 09:53:24.556967 1849924 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556975 1849924 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556999 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:53:24.557011 1849924 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.226µs
	I1124 09:53:24.557015 1849924 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:53:24.557023 1849924 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557048 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:53:24.557051 1849924 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 29.202µs
	I1124 09:53:24.557056 1849924 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:53:24.557065 1849924 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557089 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:53:24.557093 1849924 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.258µs
	I1124 09:53:24.557097 1849924 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:53:24.557129 1849924 cache.go:87] Successfully saved all images to host disk.
	I1124 09:53:24.653937 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:53:24.653952 1849924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:53:24.653984 1849924 ubuntu.go:190] setting up certificates
	I1124 09:53:24.653993 1849924 provision.go:84] configureAuth start
	I1124 09:53:24.654058 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:24.671316 1849924 provision.go:143] copyHostCerts
	I1124 09:53:24.671391 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:53:24.671399 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:53:24.671473 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:53:24.671573 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:53:24.671577 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:53:24.671611 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:53:24.671659 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:53:24.671662 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:53:24.671684 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:53:24.671727 1849924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:53:25.074688 1849924 provision.go:177] copyRemoteCerts
	I1124 09:53:25.074752 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:53:25.074789 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.095886 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.200905 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:53:25.221330 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:53:25.243399 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:53:25.263746 1849924 provision.go:87] duration metric: took 609.720286ms to configureAuth
	I1124 09:53:25.263762 1849924 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:53:25.263945 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:25.264045 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.283450 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:25.283754 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:25.283770 1849924 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:53:25.632249 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:53:25.632261 1849924 machine.go:97] duration metric: took 1.556176004s to provisionDockerMachine
	I1124 09:53:25.632272 1849924 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:53:25.632283 1849924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:53:25.632368 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:53:25.632405 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.650974 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.756910 1849924 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:53:25.760285 1849924 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:53:25.760302 1849924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:53:25.760312 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:53:25.760370 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:53:25.760445 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:53:25.760518 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:53:25.760561 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:53:25.767953 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:25.785397 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:53:25.802531 1849924 start.go:296] duration metric: took 170.24573ms for postStartSetup
	I1124 09:53:25.802613 1849924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:53:25.802665 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.819451 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.922232 1849924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:53:25.926996 1849924 fix.go:56] duration metric: took 1.871657348s for fixHost
	I1124 09:53:25.927011 1849924 start.go:83] releasing machines lock for "functional-373432", held for 1.871691088s
	I1124 09:53:25.927085 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:25.943658 1849924 ssh_runner.go:195] Run: cat /version.json
	I1124 09:53:25.943696 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.943958 1849924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:53:25.944002 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.980808 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.985182 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:26.175736 1849924 ssh_runner.go:195] Run: systemctl --version
	I1124 09:53:26.181965 1849924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:53:26.217601 1849924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:53:26.221860 1849924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:53:26.221923 1849924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:53:26.229857 1849924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:53:26.229870 1849924 start.go:496] detecting cgroup driver to use...
	I1124 09:53:26.229899 1849924 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:53:26.229945 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:53:26.244830 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:53:26.257783 1849924 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:53:26.257835 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:53:26.273202 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:53:26.286089 1849924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:53:26.392939 1849924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:53:26.505658 1849924 docker.go:234] disabling docker service ...
	I1124 09:53:26.505717 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:53:26.520682 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:53:26.533901 1849924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:53:26.643565 1849924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:53:26.781643 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:53:26.794102 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:53:26.807594 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:26.964951 1849924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:53:26.965014 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.974189 1849924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:53:26.974248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.982757 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.991310 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.000248 1849924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:53:27.009837 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.019258 1849924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.028248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.037276 1849924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:53:27.045218 1849924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:53:27.052631 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:27.162722 1849924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:53:27.344834 1849924 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:53:27.344893 1849924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:53:27.348791 1849924 start.go:564] Will wait 60s for crictl version
	I1124 09:53:27.348847 1849924 ssh_runner.go:195] Run: which crictl
	I1124 09:53:27.352314 1849924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:53:27.376797 1849924 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:53:27.376884 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.404280 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.437171 1849924 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:53:27.439969 1849924 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:53:27.457621 1849924 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:53:27.466585 1849924 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1124 09:53:27.469312 1849924 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:53:27.469546 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.636904 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.787069 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.940573 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:27.940635 1849924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:53:27.974420 1849924 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:53:27.974431 1849924 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:53:27.974436 1849924 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:53:27.974527 1849924 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:53:27.974612 1849924 ssh_runner.go:195] Run: crio config
	I1124 09:53:28.037679 1849924 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1124 09:53:28.037700 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:28.037709 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:28.037724 1849924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:53:28.037750 1849924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:53:28.037877 1849924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:53:28.037948 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:53:28.045873 1849924 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:53:28.045941 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:53:28.053444 1849924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:53:28.066325 1849924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:53:28.079790 1849924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1124 09:53:28.092701 1849924 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:53:28.096834 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:28.213078 1849924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:53:28.235943 1849924 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:53:28.235953 1849924 certs.go:195] generating shared ca certs ...
	I1124 09:53:28.235988 1849924 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:53:28.236165 1849924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:53:28.236216 1849924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:53:28.236222 1849924 certs.go:257] generating profile certs ...
	I1124 09:53:28.236320 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:53:28.236381 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:53:28.236430 1849924 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:53:28.236545 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:53:28.236581 1849924 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:53:28.236590 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:53:28.236617 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:53:28.236639 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:53:28.236676 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:53:28.236733 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:28.237452 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:53:28.267491 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:53:28.288261 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:53:28.304655 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:53:28.321607 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:53:28.339914 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:53:28.357697 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:53:28.374827 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:53:28.392170 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:53:28.410757 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:53:28.428776 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:53:28.446790 1849924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:53:28.459992 1849924 ssh_runner.go:195] Run: openssl version
	I1124 09:53:28.466084 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:53:28.474433 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478225 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478282 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.521415 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 09:53:28.529784 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:53:28.538178 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542108 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542164 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.583128 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:53:28.591113 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:53:28.599457 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603413 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603474 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.645543 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:53:28.653724 1849924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:53:28.657603 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:53:28.698734 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:53:28.739586 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:53:28.780289 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:53:28.820840 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:53:28.861343 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:53:28.902087 1849924 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:28.902167 1849924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:53:28.902236 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.929454 1849924 cri.go:89] found id: ""
	I1124 09:53:28.929519 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:53:28.937203 1849924 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:53:28.937213 1849924 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:53:28.937261 1849924 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:53:28.944668 1849924 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:28.945209 1849924 kubeconfig.go:125] found "functional-373432" server: "https://192.168.49.2:8441"
	I1124 09:53:28.946554 1849924 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:53:28.956044 1849924 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 09:38:48.454819060 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 09:53:28.085978644 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1124 09:53:28.956053 1849924 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:53:28.956064 1849924 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:53:28.956128 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.991786 1849924 cri.go:89] found id: ""
	I1124 09:53:28.991878 1849924 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:53:29.009992 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:53:29.018335 1849924 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 24 09:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 24 09:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Nov 24 09:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 24 09:42 /etc/kubernetes/scheduler.conf
	
	I1124 09:53:29.018393 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:53:29.026350 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:53:29.034215 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.034271 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:53:29.042061 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.049959 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.050015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.057477 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:53:29.065397 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.065453 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:53:29.072838 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:53:29.080812 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:29.126682 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:30.915283 1849924 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.788534288s)
	I1124 09:53:30.915375 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.124806 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.187302 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.234732 1849924 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:53:31.234802 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:31.735292 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.235922 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.735385 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.235894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.734984 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.235509 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.735644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.235724 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.235151 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.734994 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.235505 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.734925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.235891 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.235854 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.235929 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.734921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.234991 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.235015 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.734874 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.235403 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.734996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.235058 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.735496 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.235113 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.735894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.234930 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.735636 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.234914 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.734875 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.235656 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.735578 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.235469 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.735823 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.235926 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.235524 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.735679 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.235407 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.735614 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.235868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.734868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.235806 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.735801 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.235315 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.735919 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.735842 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.235491 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.235122 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.735029 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.235002 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.735695 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.236092 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.735024 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.235917 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.735341 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.235291 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.735026 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.235183 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.735898 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.235334 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.234896 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.735246 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.235531 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.235579 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.735599 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.234953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.734946 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.235705 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.735908 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.234909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.735831 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.235563 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.735909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.234992 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.735855 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.234936 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.734993 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.235585 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.235013 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.735371 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.235016 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.735593 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.735653 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.235793 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.734939 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.235317 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.735001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.235075 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.234969 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.735715 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.234859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.735010 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.235545 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.735305 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.235127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.734989 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.235601 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.734933 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.234986 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.735250 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.235727 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.734976 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.235644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.735675 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.735127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:31.234921 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:31.235007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:31.266239 1849924 cri.go:89] found id: ""
	I1124 09:54:31.266252 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.266259 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:31.266265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:31.266323 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:31.294586 1849924 cri.go:89] found id: ""
	I1124 09:54:31.294608 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.294616 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:31.294623 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:31.294694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:31.322061 1849924 cri.go:89] found id: ""
	I1124 09:54:31.322076 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.322083 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:31.322088 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:31.322159 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:31.349139 1849924 cri.go:89] found id: ""
	I1124 09:54:31.349154 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.349161 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:31.349167 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:31.349230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:31.379824 1849924 cri.go:89] found id: ""
	I1124 09:54:31.379838 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.379845 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:31.379850 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:31.379915 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:31.407206 1849924 cri.go:89] found id: ""
	I1124 09:54:31.407220 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.407228 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:31.407233 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:31.407296 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:31.435102 1849924 cri.go:89] found id: ""
	I1124 09:54:31.435117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.435123 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:31.435132 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:31.435143 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:31.504759 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:31.504779 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:31.520567 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:31.520584 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:31.587634 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:31.587666 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:31.587680 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:31.665843 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:31.665864 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.199426 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:34.210826 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:34.210886 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:34.249730 1849924 cri.go:89] found id: ""
	I1124 09:54:34.249743 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.249769 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:34.249774 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:34.249844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:34.279157 1849924 cri.go:89] found id: ""
	I1124 09:54:34.279171 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.279178 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:34.279183 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:34.279253 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:34.305617 1849924 cri.go:89] found id: ""
	I1124 09:54:34.305631 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.305655 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:34.305661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:34.305730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:34.331221 1849924 cri.go:89] found id: ""
	I1124 09:54:34.331235 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.331243 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:34.331249 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:34.331309 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:34.357361 1849924 cri.go:89] found id: ""
	I1124 09:54:34.357374 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.357381 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:34.357387 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:34.357447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:34.382790 1849924 cri.go:89] found id: ""
	I1124 09:54:34.382805 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.382812 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:34.382817 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:34.382882 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:34.408622 1849924 cri.go:89] found id: ""
	I1124 09:54:34.408635 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.408653 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:34.408661 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:34.408673 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:34.473355 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:34.473365 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:34.473376 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:34.560903 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:34.560924 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.589722 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:34.589738 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:34.659382 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:34.659407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.175501 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:37.187020 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:37.187082 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:37.215497 1849924 cri.go:89] found id: ""
	I1124 09:54:37.215511 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.215518 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:37.215524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:37.215584 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:37.252296 1849924 cri.go:89] found id: ""
	I1124 09:54:37.252310 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.252317 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:37.252323 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:37.252383 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:37.281216 1849924 cri.go:89] found id: ""
	I1124 09:54:37.281230 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.281237 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:37.281242 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:37.281302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:37.307335 1849924 cri.go:89] found id: ""
	I1124 09:54:37.307349 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.307356 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:37.307361 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:37.307435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:37.333186 1849924 cri.go:89] found id: ""
	I1124 09:54:37.333209 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.333217 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:37.333222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:37.333290 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:37.358046 1849924 cri.go:89] found id: ""
	I1124 09:54:37.358060 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.358068 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:37.358074 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:37.358130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:37.388252 1849924 cri.go:89] found id: ""
	I1124 09:54:37.388265 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.388273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:37.388280 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:37.388291 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:37.423715 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:37.423740 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:37.490800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:37.490819 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.506370 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:37.506387 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:37.571587 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:37.571597 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:37.571608 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.152603 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:40.164138 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:40.164210 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:40.192566 1849924 cri.go:89] found id: ""
	I1124 09:54:40.192581 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.192589 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:40.192594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:40.192677 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:40.233587 1849924 cri.go:89] found id: ""
	I1124 09:54:40.233616 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.233623 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:40.233628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:40.233702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:40.268152 1849924 cri.go:89] found id: ""
	I1124 09:54:40.268166 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.268173 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:40.268178 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:40.268258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:40.297572 1849924 cri.go:89] found id: ""
	I1124 09:54:40.297586 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.297593 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:40.297605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:40.297666 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:40.328480 1849924 cri.go:89] found id: ""
	I1124 09:54:40.328502 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.328511 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:40.328517 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:40.328583 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:40.354088 1849924 cri.go:89] found id: ""
	I1124 09:54:40.354102 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.354108 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:40.354114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:40.354172 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:40.384758 1849924 cri.go:89] found id: ""
	I1124 09:54:40.384772 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.384779 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:40.384786 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:40.384797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:40.452137 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:40.452157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:40.467741 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:40.467757 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:40.535224 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:40.535235 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:40.535246 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.615981 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:40.616005 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:43.148076 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:43.158106 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:43.158169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:43.182985 1849924 cri.go:89] found id: ""
	I1124 09:54:43.182999 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.183006 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:43.183012 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:43.183068 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:43.215806 1849924 cri.go:89] found id: ""
	I1124 09:54:43.215820 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.215837 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:43.215844 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:43.215903 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:43.244278 1849924 cri.go:89] found id: ""
	I1124 09:54:43.244301 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.244309 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:43.244314 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:43.244385 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:43.272908 1849924 cri.go:89] found id: ""
	I1124 09:54:43.272931 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.272938 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:43.272949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:43.273029 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:43.297907 1849924 cri.go:89] found id: ""
	I1124 09:54:43.297921 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.297927 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:43.297933 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:43.298008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:43.330376 1849924 cri.go:89] found id: ""
	I1124 09:54:43.330391 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.330397 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:43.330403 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:43.330459 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:43.359850 1849924 cri.go:89] found id: ""
	I1124 09:54:43.359864 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.359871 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:43.359879 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:43.359898 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:43.426992 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:43.427012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:43.441799 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:43.441816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:43.504072 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:43.504082 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:43.504093 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:43.585362 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:43.585390 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.114191 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:46.124223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:46.124285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:46.151013 1849924 cri.go:89] found id: ""
	I1124 09:54:46.151027 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.151034 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:46.151039 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:46.151096 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:46.177170 1849924 cri.go:89] found id: ""
	I1124 09:54:46.177184 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.177191 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:46.177196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:46.177258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:46.205800 1849924 cri.go:89] found id: ""
	I1124 09:54:46.205814 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.205822 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:46.205828 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:46.205893 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:46.239665 1849924 cri.go:89] found id: ""
	I1124 09:54:46.239689 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.239697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:46.239702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:46.239782 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:46.274455 1849924 cri.go:89] found id: ""
	I1124 09:54:46.274480 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.274488 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:46.274494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:46.274574 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:46.300659 1849924 cri.go:89] found id: ""
	I1124 09:54:46.300673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.300680 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:46.300686 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:46.300760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:46.326694 1849924 cri.go:89] found id: ""
	I1124 09:54:46.326708 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.326715 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:46.326723 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:46.326735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:46.389430 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:46.389441 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:46.389452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:46.467187 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:46.467207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.499873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:46.499889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:46.574600 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:46.574626 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.092671 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:49.102878 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:49.102942 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:49.130409 1849924 cri.go:89] found id: ""
	I1124 09:54:49.130431 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.130439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:49.130445 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:49.130508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:49.156861 1849924 cri.go:89] found id: ""
	I1124 09:54:49.156874 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.156891 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:49.156897 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:49.156964 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:49.183346 1849924 cri.go:89] found id: ""
	I1124 09:54:49.183369 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.183376 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:49.183382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:49.183442 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:49.217035 1849924 cri.go:89] found id: ""
	I1124 09:54:49.217049 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.217056 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:49.217062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:49.217146 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:49.245694 1849924 cri.go:89] found id: ""
	I1124 09:54:49.245713 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.245720 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:49.245726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:49.245891 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:49.284969 1849924 cri.go:89] found id: ""
	I1124 09:54:49.284983 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.284990 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:49.284995 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:49.285055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:49.314521 1849924 cri.go:89] found id: ""
	I1124 09:54:49.314535 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.314542 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:49.314549 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:49.314560 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:49.398958 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:49.398979 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:49.428494 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:49.428511 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:49.497701 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:49.497725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.513336 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:49.513352 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:49.581585 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.081862 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:52.092629 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:52.092692 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:52.124453 1849924 cri.go:89] found id: ""
	I1124 09:54:52.124475 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.124482 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:52.124488 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:52.124546 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:52.151758 1849924 cri.go:89] found id: ""
	I1124 09:54:52.151771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.151778 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:52.151784 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:52.151844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:52.176757 1849924 cri.go:89] found id: ""
	I1124 09:54:52.176771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.176778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:52.176783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:52.176846 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:52.201940 1849924 cri.go:89] found id: ""
	I1124 09:54:52.201954 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.201961 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:52.201967 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:52.202025 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:52.248612 1849924 cri.go:89] found id: ""
	I1124 09:54:52.248625 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.248632 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:52.248638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:52.248713 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:52.279382 1849924 cri.go:89] found id: ""
	I1124 09:54:52.279396 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.279404 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:52.279409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:52.279471 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:52.308695 1849924 cri.go:89] found id: ""
	I1124 09:54:52.308709 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.308717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:52.308724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:52.308735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:52.376027 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:52.376050 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:52.391327 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:52.391343 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:52.459367 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.459377 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:52.459389 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:52.535870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:52.535893 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:55.066284 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:55.077139 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:55.077203 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:55.105400 1849924 cri.go:89] found id: ""
	I1124 09:54:55.105498 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.105506 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:55.105512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:55.105620 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:55.136637 1849924 cri.go:89] found id: ""
	I1124 09:54:55.136651 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.136659 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:55.136664 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:55.136729 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:55.164659 1849924 cri.go:89] found id: ""
	I1124 09:54:55.164673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.164680 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:55.164685 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:55.164749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:55.190091 1849924 cri.go:89] found id: ""
	I1124 09:54:55.190117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.190124 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:55.190129 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:55.190191 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:55.224336 1849924 cri.go:89] found id: ""
	I1124 09:54:55.224351 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.224358 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:55.224363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:55.224424 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:55.259735 1849924 cri.go:89] found id: ""
	I1124 09:54:55.259748 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.259755 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:55.259761 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:55.259821 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:55.290052 1849924 cri.go:89] found id: ""
	I1124 09:54:55.290065 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.290072 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:55.290079 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:55.290090 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:55.355938 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:55.355957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:55.371501 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:55.371518 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:55.437126 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:55.437140 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:55.437152 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:55.515834 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:55.515854 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.048421 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:58.059495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:58.059560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:58.087204 1849924 cri.go:89] found id: ""
	I1124 09:54:58.087219 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.087226 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:58.087232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:58.087292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:58.118248 1849924 cri.go:89] found id: ""
	I1124 09:54:58.118262 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.118270 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:58.118276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:58.118336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:58.144878 1849924 cri.go:89] found id: ""
	I1124 09:54:58.144892 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.144899 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:58.144905 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:58.144963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:58.171781 1849924 cri.go:89] found id: ""
	I1124 09:54:58.171795 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.171814 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:58.171820 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:58.171898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:58.200885 1849924 cri.go:89] found id: ""
	I1124 09:54:58.200907 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.200915 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:58.200920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:58.200993 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:58.231674 1849924 cri.go:89] found id: ""
	I1124 09:54:58.231688 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.231695 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:58.231718 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:58.231792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:58.266664 1849924 cri.go:89] found id: ""
	I1124 09:54:58.266679 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.266686 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:58.266694 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:58.266705 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.300806 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:58.300822 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:58.367929 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:58.367949 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:58.383950 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:58.383967 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:58.449243 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:58.449254 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:58.449279 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:01.029569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:01.040150 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:01.040231 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:01.067942 1849924 cri.go:89] found id: ""
	I1124 09:55:01.067955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.067962 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:01.067968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:01.068031 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:01.095348 1849924 cri.go:89] found id: ""
	I1124 09:55:01.095362 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.095369 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:01.095375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:01.095436 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:01.125781 1849924 cri.go:89] found id: ""
	I1124 09:55:01.125795 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.125803 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:01.125808 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:01.125871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:01.153546 1849924 cri.go:89] found id: ""
	I1124 09:55:01.153561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.153568 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:01.153575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:01.153643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:01.183965 1849924 cri.go:89] found id: ""
	I1124 09:55:01.183980 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.183987 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:01.183993 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:01.184055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:01.218518 1849924 cri.go:89] found id: ""
	I1124 09:55:01.218533 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.218541 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:01.218548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:01.218628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:01.255226 1849924 cri.go:89] found id: ""
	I1124 09:55:01.255241 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.255248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:01.255255 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:01.255266 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:01.290705 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:01.290723 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:01.362275 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:01.362296 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:01.378338 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:01.378357 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:01.447338 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:01.447348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:01.447359 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.029431 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:04.039677 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:04.039753 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:04.064938 1849924 cri.go:89] found id: ""
	I1124 09:55:04.064952 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.064968 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:04.064975 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:04.065032 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:04.091065 1849924 cri.go:89] found id: ""
	I1124 09:55:04.091079 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.091087 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:04.091092 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:04.091155 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:04.119888 1849924 cri.go:89] found id: ""
	I1124 09:55:04.119902 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.119910 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:04.119915 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:04.119990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:04.145893 1849924 cri.go:89] found id: ""
	I1124 09:55:04.145907 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.145914 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:04.145920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:04.145981 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:04.172668 1849924 cri.go:89] found id: ""
	I1124 09:55:04.172682 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.172689 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:04.172695 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:04.172770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:04.199546 1849924 cri.go:89] found id: ""
	I1124 09:55:04.199559 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.199576 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:04.199582 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:04.199654 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:04.233837 1849924 cri.go:89] found id: ""
	I1124 09:55:04.233850 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.233857 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:04.233865 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:04.233875 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:04.312846 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:04.312868 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:04.328376 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:04.328393 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:04.392893 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:04.392903 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:04.392914 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.474469 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:04.474497 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.002775 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:07.014668 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:07.014734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:07.041533 1849924 cri.go:89] found id: ""
	I1124 09:55:07.041549 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.041556 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:07.041563 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:07.041628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:07.071414 1849924 cri.go:89] found id: ""
	I1124 09:55:07.071429 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.071436 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:07.071442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:07.071500 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:07.102622 1849924 cri.go:89] found id: ""
	I1124 09:55:07.102637 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.102644 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:07.102650 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:07.102708 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:07.127684 1849924 cri.go:89] found id: ""
	I1124 09:55:07.127713 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.127720 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:07.127726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:07.127792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:07.153696 1849924 cri.go:89] found id: ""
	I1124 09:55:07.153710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.153718 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:07.153724 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:07.153785 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:07.186158 1849924 cri.go:89] found id: ""
	I1124 09:55:07.186180 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.186187 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:07.186193 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:07.186252 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:07.217520 1849924 cri.go:89] found id: ""
	I1124 09:55:07.217554 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.217562 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:07.217570 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:07.217580 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.247265 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:07.247288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:07.320517 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:07.320537 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:07.336358 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:07.336373 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:07.403281 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:07.403292 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:07.403302 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:09.981463 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:09.992128 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:09.992195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:10.021174 1849924 cri.go:89] found id: ""
	I1124 09:55:10.021189 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.021197 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:10.021203 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:10.021267 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:10.049180 1849924 cri.go:89] found id: ""
	I1124 09:55:10.049194 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.049202 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:10.049207 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:10.049270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:10.078645 1849924 cri.go:89] found id: ""
	I1124 09:55:10.078660 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.078667 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:10.078673 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:10.078734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:10.106290 1849924 cri.go:89] found id: ""
	I1124 09:55:10.106304 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.106312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:10.106318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:10.106390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:10.133401 1849924 cri.go:89] found id: ""
	I1124 09:55:10.133455 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.133462 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:10.133468 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:10.133544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:10.162805 1849924 cri.go:89] found id: ""
	I1124 09:55:10.162820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.162827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:10.162833 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:10.162890 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:10.189156 1849924 cri.go:89] found id: ""
	I1124 09:55:10.189170 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.189177 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:10.189185 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:10.189206 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:10.280238 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:10.280247 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:10.280258 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:10.359007 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:10.359031 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:10.395999 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:10.396024 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:10.462661 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:10.462683 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:12.979323 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:12.989228 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:12.989300 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:13.016908 1849924 cri.go:89] found id: ""
	I1124 09:55:13.016922 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.016929 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:13.016935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:13.016998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:13.044445 1849924 cri.go:89] found id: ""
	I1124 09:55:13.044467 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.044474 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:13.044480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:13.044547 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:13.070357 1849924 cri.go:89] found id: ""
	I1124 09:55:13.070379 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.070387 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:13.070392 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:13.070461 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:13.098253 1849924 cri.go:89] found id: ""
	I1124 09:55:13.098267 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.098274 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:13.098280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:13.098339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:13.124183 1849924 cri.go:89] found id: ""
	I1124 09:55:13.124196 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.124203 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:13.124209 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:13.124269 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:13.150521 1849924 cri.go:89] found id: ""
	I1124 09:55:13.150536 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.150543 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:13.150549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:13.150619 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:13.181696 1849924 cri.go:89] found id: ""
	I1124 09:55:13.181710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.181717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:13.181724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:13.181735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:13.250758 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:13.250778 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:13.271249 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:13.271264 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:13.332213 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:13.332223 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:13.332235 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:13.409269 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:13.409293 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:15.940893 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:15.951127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:15.951201 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:15.976744 1849924 cri.go:89] found id: ""
	I1124 09:55:15.976767 1849924 logs.go:282] 0 containers: []
	W1124 09:55:15.976774 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:15.976780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:15.976848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:16.005218 1849924 cri.go:89] found id: ""
	I1124 09:55:16.005235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.005245 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:16.005251 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:16.005336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:16.036862 1849924 cri.go:89] found id: ""
	I1124 09:55:16.036888 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.036896 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:16.036902 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:16.036990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:16.063354 1849924 cri.go:89] found id: ""
	I1124 09:55:16.063369 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.063376 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:16.063382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:16.063455 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:16.092197 1849924 cri.go:89] found id: ""
	I1124 09:55:16.092211 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.092218 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:16.092224 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:16.092286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:16.117617 1849924 cri.go:89] found id: ""
	I1124 09:55:16.117631 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.117639 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:16.117644 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:16.117702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:16.143200 1849924 cri.go:89] found id: ""
	I1124 09:55:16.143214 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.143220 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:16.143228 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:16.143239 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:16.171873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:16.171889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:16.247500 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:16.247519 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:16.267064 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:16.267080 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:16.337347 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:16.337357 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:16.337368 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:18.916700 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:18.927603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:18.927697 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:18.958633 1849924 cri.go:89] found id: ""
	I1124 09:55:18.958649 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.958656 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:18.958662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:18.958725 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:18.988567 1849924 cri.go:89] found id: ""
	I1124 09:55:18.988582 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.988589 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:18.988594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:18.988665 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:19.016972 1849924 cri.go:89] found id: ""
	I1124 09:55:19.016986 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.016993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:19.016999 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:19.017058 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:19.042806 1849924 cri.go:89] found id: ""
	I1124 09:55:19.042827 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.042835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:19.042841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:19.042905 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:19.073274 1849924 cri.go:89] found id: ""
	I1124 09:55:19.073288 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.073296 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:19.073301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:19.073368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:19.099687 1849924 cri.go:89] found id: ""
	I1124 09:55:19.099701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.099708 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:19.099714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:19.099780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:19.126512 1849924 cri.go:89] found id: ""
	I1124 09:55:19.126526 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.126532 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:19.126540 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:19.126550 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:19.194410 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:19.194430 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:19.216505 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:19.216527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:19.291566 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:19.291578 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:19.291591 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:19.371192 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:19.371213 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:21.902356 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:21.912405 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:21.912468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:21.937243 1849924 cri.go:89] found id: ""
	I1124 09:55:21.937256 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.937270 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:21.937276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:21.937335 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:21.963054 1849924 cri.go:89] found id: ""
	I1124 09:55:21.963068 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.963075 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:21.963080 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:21.963136 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:21.988695 1849924 cri.go:89] found id: ""
	I1124 09:55:21.988708 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.988715 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:21.988722 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:21.988780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:22.015029 1849924 cri.go:89] found id: ""
	I1124 09:55:22.015043 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.015050 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:22.015056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:22.015117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:22.044828 1849924 cri.go:89] found id: ""
	I1124 09:55:22.044843 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.044851 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:22.044857 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:22.044919 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:22.071875 1849924 cri.go:89] found id: ""
	I1124 09:55:22.071889 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.071897 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:22.071903 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:22.071970 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:22.099237 1849924 cri.go:89] found id: ""
	I1124 09:55:22.099252 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.099259 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:22.099267 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:22.099278 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:22.170156 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:22.170176 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:22.185271 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:22.185288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:22.271963 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:22.271973 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:22.271984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:22.349426 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:22.349447 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:24.878185 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:24.888725 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:24.888800 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:24.915846 1849924 cri.go:89] found id: ""
	I1124 09:55:24.915860 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.915867 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:24.915872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:24.915931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:24.944104 1849924 cri.go:89] found id: ""
	I1124 09:55:24.944118 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.944125 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:24.944131 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:24.944196 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:24.970424 1849924 cri.go:89] found id: ""
	I1124 09:55:24.970438 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.970445 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:24.970450 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:24.970511 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:24.999941 1849924 cri.go:89] found id: ""
	I1124 09:55:24.999955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.999962 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:24.999968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:25.000027 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:25.030682 1849924 cri.go:89] found id: ""
	I1124 09:55:25.030700 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.030707 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:25.030714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:25.030788 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:25.061169 1849924 cri.go:89] found id: ""
	I1124 09:55:25.061183 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.061191 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:25.061196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:25.061262 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:25.092046 1849924 cri.go:89] found id: ""
	I1124 09:55:25.092061 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.092069 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:25.092078 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:25.092089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:25.164204 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:25.164229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:25.180461 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:25.180477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:25.270104 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:25.270114 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:25.270125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:25.349962 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:25.349985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:27.885869 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:27.895923 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:27.895990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:27.923576 1849924 cri.go:89] found id: ""
	I1124 09:55:27.923591 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.923598 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:27.923604 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:27.923660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:27.949384 1849924 cri.go:89] found id: ""
	I1124 09:55:27.949398 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.949405 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:27.949409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:27.949468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:27.974662 1849924 cri.go:89] found id: ""
	I1124 09:55:27.974675 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.974682 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:27.974687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:27.974752 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:28.000014 1849924 cri.go:89] found id: ""
	I1124 09:55:28.000028 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.000035 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:28.000041 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:28.000113 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:28.031383 1849924 cri.go:89] found id: ""
	I1124 09:55:28.031397 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.031404 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:28.031410 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:28.031468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:28.062851 1849924 cri.go:89] found id: ""
	I1124 09:55:28.062872 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.062880 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:28.062886 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:28.062965 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:28.091592 1849924 cri.go:89] found id: ""
	I1124 09:55:28.091608 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.091623 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:28.091633 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:28.091646 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:28.125018 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:28.125035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:28.190729 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:28.190751 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:28.205665 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:28.205681 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:28.285905 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:28.285917 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:28.285927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:30.864245 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:30.876164 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:30.876248 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:30.901572 1849924 cri.go:89] found id: ""
	I1124 09:55:30.901586 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.901593 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:30.901599 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:30.901659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:30.931361 1849924 cri.go:89] found id: ""
	I1124 09:55:30.931374 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.931382 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:30.931388 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:30.931449 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:30.956087 1849924 cri.go:89] found id: ""
	I1124 09:55:30.956101 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.956108 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:30.956114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:30.956174 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:30.981912 1849924 cri.go:89] found id: ""
	I1124 09:55:30.981925 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.981933 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:30.981938 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:30.982013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:31.010764 1849924 cri.go:89] found id: ""
	I1124 09:55:31.010778 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.010804 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:31.010811 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:31.010884 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:31.037094 1849924 cri.go:89] found id: ""
	I1124 09:55:31.037140 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.037146 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:31.037153 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:31.037221 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:31.064060 1849924 cri.go:89] found id: ""
	I1124 09:55:31.064075 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.064092 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:31.064100 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:31.064111 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:31.129432 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:31.129444 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:31.129455 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:31.207603 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:31.207622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:31.246019 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:31.246035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:31.313859 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:31.313882 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:33.829785 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:33.839749 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:33.839813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:33.864810 1849924 cri.go:89] found id: ""
	I1124 09:55:33.864824 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.864831 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:33.864837 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:33.864898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:33.890309 1849924 cri.go:89] found id: ""
	I1124 09:55:33.890324 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.890331 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:33.890336 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:33.890401 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:33.922386 1849924 cri.go:89] found id: ""
	I1124 09:55:33.922399 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.922406 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:33.922412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:33.922473 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:33.947199 1849924 cri.go:89] found id: ""
	I1124 09:55:33.947213 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.947220 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:33.947226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:33.947289 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:33.972195 1849924 cri.go:89] found id: ""
	I1124 09:55:33.972209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.972216 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:33.972222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:33.972294 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:33.997877 1849924 cri.go:89] found id: ""
	I1124 09:55:33.997891 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.997898 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:33.997904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:33.997961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:34.024719 1849924 cri.go:89] found id: ""
	I1124 09:55:34.024733 1849924 logs.go:282] 0 containers: []
	W1124 09:55:34.024741 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:34.024748 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:34.024769 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:34.089874 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:34.089896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:34.104839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:34.104857 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:34.171681 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:34.171691 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:34.171702 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:34.249876 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:34.249896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:36.781512 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:36.791518 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:36.791579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:36.820485 1849924 cri.go:89] found id: ""
	I1124 09:55:36.820500 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.820508 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:36.820514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:36.820589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:36.845963 1849924 cri.go:89] found id: ""
	I1124 09:55:36.845978 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.845985 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:36.845991 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:36.846062 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:36.880558 1849924 cri.go:89] found id: ""
	I1124 09:55:36.880573 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.880580 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:36.880586 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:36.880656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:36.908730 1849924 cri.go:89] found id: ""
	I1124 09:55:36.908745 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.908752 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:36.908769 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:36.908830 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:36.936618 1849924 cri.go:89] found id: ""
	I1124 09:55:36.936634 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.936646 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:36.936662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:36.936724 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:36.961091 1849924 cri.go:89] found id: ""
	I1124 09:55:36.961134 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.961142 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:36.961148 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:36.961215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:36.986263 1849924 cri.go:89] found id: ""
	I1124 09:55:36.986278 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.986285 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:36.986293 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:36.986304 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:37.061090 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:37.061120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:37.076634 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:37.076652 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:37.144407 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:37.144417 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:37.144427 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:37.223887 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:37.223907 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:39.759307 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:39.769265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:39.769325 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:39.795092 1849924 cri.go:89] found id: ""
	I1124 09:55:39.795107 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.795114 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:39.795120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:39.795180 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:39.821381 1849924 cri.go:89] found id: ""
	I1124 09:55:39.821396 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.821403 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:39.821408 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:39.821480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:39.850195 1849924 cri.go:89] found id: ""
	I1124 09:55:39.850209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.850224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:39.850232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:39.850291 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:39.875376 1849924 cri.go:89] found id: ""
	I1124 09:55:39.875391 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.875398 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:39.875404 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:39.875466 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:39.904124 1849924 cri.go:89] found id: ""
	I1124 09:55:39.904138 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.904146 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:39.904151 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:39.904222 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:39.930807 1849924 cri.go:89] found id: ""
	I1124 09:55:39.930820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.930827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:39.930832 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:39.930889 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:39.960435 1849924 cri.go:89] found id: ""
	I1124 09:55:39.960449 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.960456 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:39.960464 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:39.960475 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:40.030261 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:40.030271 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:40.030283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:40.109590 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:40.109615 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:40.143688 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:40.143704 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:40.212394 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:40.212412 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:42.734304 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:42.744432 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:42.744494 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:42.769686 1849924 cri.go:89] found id: ""
	I1124 09:55:42.769701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.769708 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:42.769714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:42.769774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:42.794368 1849924 cri.go:89] found id: ""
	I1124 09:55:42.794381 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.794388 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:42.794394 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:42.794460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:42.819036 1849924 cri.go:89] found id: ""
	I1124 09:55:42.819051 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.819058 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:42.819067 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:42.819126 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:42.845429 1849924 cri.go:89] found id: ""
	I1124 09:55:42.845444 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.845452 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:42.845457 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:42.845516 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:42.873391 1849924 cri.go:89] found id: ""
	I1124 09:55:42.873405 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.873412 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:42.873418 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:42.873483 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:42.899532 1849924 cri.go:89] found id: ""
	I1124 09:55:42.899560 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.899567 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:42.899575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:42.899642 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:42.925159 1849924 cri.go:89] found id: ""
	I1124 09:55:42.925173 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.925180 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:42.925188 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:42.925215 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:43.003079 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:43.003104 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:43.041964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:43.041990 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:43.120202 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:43.120224 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:43.143097 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:43.143191 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:43.219616 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:45.719895 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:45.730306 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:45.730370 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:45.755318 1849924 cri.go:89] found id: ""
	I1124 09:55:45.755333 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.755341 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:45.755353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:45.755413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:45.781283 1849924 cri.go:89] found id: ""
	I1124 09:55:45.781299 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.781305 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:45.781311 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:45.781369 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:45.807468 1849924 cri.go:89] found id: ""
	I1124 09:55:45.807482 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.807489 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:45.807495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:45.807554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:45.836726 1849924 cri.go:89] found id: ""
	I1124 09:55:45.836741 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.836749 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:45.836754 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:45.836813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:45.862613 1849924 cri.go:89] found id: ""
	I1124 09:55:45.862628 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.862635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:45.862641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:45.862702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:45.894972 1849924 cri.go:89] found id: ""
	I1124 09:55:45.894987 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.894994 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:45.895000 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:45.895067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:45.922194 1849924 cri.go:89] found id: ""
	I1124 09:55:45.922209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.922217 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:45.922224 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:45.922237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:45.954912 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:45.954930 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:46.021984 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:46.022004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:46.037849 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:46.037865 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:46.101460 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:46.101473 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:46.101483 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:48.688081 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:48.698194 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:48.698260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:48.724390 1849924 cri.go:89] found id: ""
	I1124 09:55:48.724404 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.724411 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:48.724416 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:48.724480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:48.749323 1849924 cri.go:89] found id: ""
	I1124 09:55:48.749337 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.749344 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:48.749350 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:48.749406 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:48.774542 1849924 cri.go:89] found id: ""
	I1124 09:55:48.774555 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.774562 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:48.774569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:48.774635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:48.799553 1849924 cri.go:89] found id: ""
	I1124 09:55:48.799568 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.799575 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:48.799580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:48.799637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:48.824768 1849924 cri.go:89] found id: ""
	I1124 09:55:48.824782 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.824789 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:48.824794 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:48.824849 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:48.853654 1849924 cri.go:89] found id: ""
	I1124 09:55:48.853668 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.853674 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:48.853680 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:48.853738 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:48.880137 1849924 cri.go:89] found id: ""
	I1124 09:55:48.880151 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.880158 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:48.880166 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:48.880178 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:48.943985 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:48.943998 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:48.944008 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:49.021387 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:49.021407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:49.054551 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:49.054566 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:49.124670 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:49.124690 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.640001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:51.650264 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:51.650326 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:51.675421 1849924 cri.go:89] found id: ""
	I1124 09:55:51.675434 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.675442 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:51.675447 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:51.675510 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:51.703552 1849924 cri.go:89] found id: ""
	I1124 09:55:51.703566 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.703573 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:51.703578 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:51.703637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:51.731457 1849924 cri.go:89] found id: ""
	I1124 09:55:51.731470 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.731477 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:51.731483 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:51.731540 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:51.757515 1849924 cri.go:89] found id: ""
	I1124 09:55:51.757529 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.757536 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:51.757541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:51.757604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:51.787493 1849924 cri.go:89] found id: ""
	I1124 09:55:51.787507 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.787514 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:51.787520 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:51.787579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:51.813153 1849924 cri.go:89] found id: ""
	I1124 09:55:51.813166 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.813173 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:51.813179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:51.813250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:51.845222 1849924 cri.go:89] found id: ""
	I1124 09:55:51.845235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.845244 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:51.845252 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:51.845272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.860214 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:51.860236 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:51.924176 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:51.924186 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:51.924196 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:52.001608 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:52.001629 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:52.037448 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:52.037466 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.609480 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:54.620161 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:54.620223 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:54.649789 1849924 cri.go:89] found id: ""
	I1124 09:55:54.649803 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.649810 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:54.649816 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:54.649879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:54.677548 1849924 cri.go:89] found id: ""
	I1124 09:55:54.677561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.677568 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:54.677573 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:54.677635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:54.707602 1849924 cri.go:89] found id: ""
	I1124 09:55:54.707616 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.707623 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:54.707628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:54.707687 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:54.737369 1849924 cri.go:89] found id: ""
	I1124 09:55:54.737382 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.737390 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:54.737396 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:54.737460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:54.764514 1849924 cri.go:89] found id: ""
	I1124 09:55:54.764528 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.764536 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:54.764541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:54.764599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:54.789898 1849924 cri.go:89] found id: ""
	I1124 09:55:54.789912 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.789920 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:54.789925 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:54.789986 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:54.815652 1849924 cri.go:89] found id: ""
	I1124 09:55:54.815665 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.815672 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:54.815681 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:54.815691 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.882879 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:54.882901 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:54.898593 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:54.898622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:54.967134 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:54.967146 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:54.967157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:55.046870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:55.046891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.578091 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:57.588580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:57.588643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:57.617411 1849924 cri.go:89] found id: ""
	I1124 09:55:57.617425 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.617432 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:57.617437 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:57.617503 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:57.642763 1849924 cri.go:89] found id: ""
	I1124 09:55:57.642777 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.642784 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:57.642789 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:57.642848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:57.668484 1849924 cri.go:89] found id: ""
	I1124 09:55:57.668499 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.668506 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:57.668512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:57.668571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:57.694643 1849924 cri.go:89] found id: ""
	I1124 09:55:57.694657 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.694664 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:57.694670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:57.694730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:57.720049 1849924 cri.go:89] found id: ""
	I1124 09:55:57.720063 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.720070 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:57.720075 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:57.720140 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:57.748016 1849924 cri.go:89] found id: ""
	I1124 09:55:57.748029 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.748036 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:57.748044 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:57.748104 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:57.774253 1849924 cri.go:89] found id: ""
	I1124 09:55:57.774266 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.774273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:57.774281 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:57.774295 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:57.789236 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:57.789253 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:57.851207 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:55:57.851217 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:57.851229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:57.927927 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:57.927946 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.959058 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:57.959075 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.529440 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:00.539970 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:00.540034 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:00.566556 1849924 cri.go:89] found id: ""
	I1124 09:56:00.566570 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.566583 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:00.566589 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:00.566659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:00.596278 1849924 cri.go:89] found id: ""
	I1124 09:56:00.596291 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.596298 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:00.596304 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:00.596362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:00.623580 1849924 cri.go:89] found id: ""
	I1124 09:56:00.623593 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.623600 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:00.623605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:00.623664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:00.648991 1849924 cri.go:89] found id: ""
	I1124 09:56:00.649006 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.649012 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:00.649018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:00.649078 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:00.676614 1849924 cri.go:89] found id: ""
	I1124 09:56:00.676628 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.676635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:00.676641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:00.676706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:00.701480 1849924 cri.go:89] found id: ""
	I1124 09:56:00.701502 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.701509 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:00.701516 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:00.701575 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:00.727550 1849924 cri.go:89] found id: ""
	I1124 09:56:00.727563 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.727570 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:00.727578 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:00.727589 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:00.755964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:00.755980 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.822018 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:00.822039 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:00.837252 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:00.837268 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:00.901931 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:00.901942 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:00.901957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.481859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:03.493893 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:03.493961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:03.522628 1849924 cri.go:89] found id: ""
	I1124 09:56:03.522643 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.522650 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:03.522656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:03.522716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:03.551454 1849924 cri.go:89] found id: ""
	I1124 09:56:03.551468 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.551475 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:03.551480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:03.551539 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:03.580931 1849924 cri.go:89] found id: ""
	I1124 09:56:03.580945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.580951 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:03.580957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:03.581015 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:03.607826 1849924 cri.go:89] found id: ""
	I1124 09:56:03.607840 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.607846 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:03.607852 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:03.607923 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:03.637843 1849924 cri.go:89] found id: ""
	I1124 09:56:03.637857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.637865 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:03.637870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:03.637931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:03.665156 1849924 cri.go:89] found id: ""
	I1124 09:56:03.665170 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.665176 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:03.665182 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:03.665250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:03.690810 1849924 cri.go:89] found id: ""
	I1124 09:56:03.690824 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.690831 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:03.690839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:03.690849 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:03.755803 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:03.755813 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:03.755823 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.832793 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:03.832816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:03.860351 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:03.860367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:03.930446 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:03.930465 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.445925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:06.457385 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:06.457451 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:06.490931 1849924 cri.go:89] found id: ""
	I1124 09:56:06.490944 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.490951 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:06.490956 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:06.491013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:06.529326 1849924 cri.go:89] found id: ""
	I1124 09:56:06.529340 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.529347 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:06.529353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:06.529409 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:06.554888 1849924 cri.go:89] found id: ""
	I1124 09:56:06.554914 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.554921 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:06.554926 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:06.554984 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:06.579750 1849924 cri.go:89] found id: ""
	I1124 09:56:06.579764 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.579771 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:06.579781 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:06.579839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:06.605075 1849924 cri.go:89] found id: ""
	I1124 09:56:06.605098 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.605134 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:06.605140 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:06.605207 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:06.630281 1849924 cri.go:89] found id: ""
	I1124 09:56:06.630295 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.630302 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:06.630307 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:06.630366 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:06.655406 1849924 cri.go:89] found id: ""
	I1124 09:56:06.655427 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.655435 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:06.655442 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:06.655453 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:06.722316 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:06.722335 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.737174 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:06.737190 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:06.801018 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:06.801032 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:06.801042 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:06.882225 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:06.882254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.412996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:09.423266 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:09.423332 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:09.452270 1849924 cri.go:89] found id: ""
	I1124 09:56:09.452283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.452290 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:09.452295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:09.452353 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:09.484931 1849924 cri.go:89] found id: ""
	I1124 09:56:09.484945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.484952 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:09.484957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:09.485030 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:09.526676 1849924 cri.go:89] found id: ""
	I1124 09:56:09.526689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.526696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:09.526701 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:09.526758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:09.551815 1849924 cri.go:89] found id: ""
	I1124 09:56:09.551828 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.551835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:09.551841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:09.551904 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:09.580143 1849924 cri.go:89] found id: ""
	I1124 09:56:09.580159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.580167 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:09.580173 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:09.580233 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:09.608255 1849924 cri.go:89] found id: ""
	I1124 09:56:09.608269 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.608276 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:09.608281 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:09.608338 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:09.638262 1849924 cri.go:89] found id: ""
	I1124 09:56:09.638276 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.638283 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:09.638291 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:09.638301 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:09.713707 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:09.713728 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.741202 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:09.741218 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:09.806578 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:09.806598 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:09.821839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:09.821855 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:09.888815 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.390494 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:12.400491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:12.400550 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:12.426496 1849924 cri.go:89] found id: ""
	I1124 09:56:12.426511 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.426517 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:12.426524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:12.426587 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:12.457770 1849924 cri.go:89] found id: ""
	I1124 09:56:12.457794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.457801 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:12.457807 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:12.457873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:12.489154 1849924 cri.go:89] found id: ""
	I1124 09:56:12.489167 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.489174 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:12.489179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:12.489250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:12.524997 1849924 cri.go:89] found id: ""
	I1124 09:56:12.525010 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.525018 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:12.525024 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:12.525090 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:12.550538 1849924 cri.go:89] found id: ""
	I1124 09:56:12.550561 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.550569 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:12.550574 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:12.550650 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:12.575990 1849924 cri.go:89] found id: ""
	I1124 09:56:12.576011 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.576018 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:12.576025 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:12.576095 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:12.602083 1849924 cri.go:89] found id: ""
	I1124 09:56:12.602097 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.602104 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:12.602112 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:12.602125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:12.667794 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:12.667814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:12.682815 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:12.682832 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:12.749256 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.749266 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:12.749276 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:12.823882 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:12.823902 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.353890 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:15.364319 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:15.364380 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:15.389759 1849924 cri.go:89] found id: ""
	I1124 09:56:15.389772 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.389786 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:15.389792 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:15.389850 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:15.414921 1849924 cri.go:89] found id: ""
	I1124 09:56:15.414936 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.414943 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:15.414948 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:15.415008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:15.444228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.444242 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.444249 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:15.444254 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:15.444314 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:15.476734 1849924 cri.go:89] found id: ""
	I1124 09:56:15.476747 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.476763 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:15.476768 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:15.476836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:15.507241 1849924 cri.go:89] found id: ""
	I1124 09:56:15.507254 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.507261 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:15.507275 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:15.507339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:15.544058 1849924 cri.go:89] found id: ""
	I1124 09:56:15.544081 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.544089 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:15.544094 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:15.544162 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:15.571228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.571241 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.571248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:15.571261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:15.571272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:15.646647 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:15.646667 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.674311 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:15.674326 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:15.739431 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:15.739451 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:15.754640 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:15.754662 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:15.821471 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.321745 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:18.331603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:18.331664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:18.357195 1849924 cri.go:89] found id: ""
	I1124 09:56:18.357215 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.357223 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:18.357229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:18.357292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:18.387513 1849924 cri.go:89] found id: ""
	I1124 09:56:18.387527 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.387534 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:18.387540 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:18.387600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:18.414561 1849924 cri.go:89] found id: ""
	I1124 09:56:18.414583 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.414590 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:18.414596 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:18.414670 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:18.441543 1849924 cri.go:89] found id: ""
	I1124 09:56:18.441557 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.441564 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:18.441569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:18.441627 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:18.481911 1849924 cri.go:89] found id: ""
	I1124 09:56:18.481924 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.481931 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:18.481937 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:18.481995 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:18.512577 1849924 cri.go:89] found id: ""
	I1124 09:56:18.512589 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.512596 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:18.512601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:18.512660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:18.542006 1849924 cri.go:89] found id: ""
	I1124 09:56:18.542021 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.542028 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:18.542035 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:18.542045 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:18.572217 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:18.572233 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:18.637845 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:18.637863 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:18.653892 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:18.653908 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:18.720870 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.720881 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:18.720891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.300479 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:21.310612 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:21.310716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:21.339787 1849924 cri.go:89] found id: ""
	I1124 09:56:21.339801 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.339808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:21.339819 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:21.339879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:21.364577 1849924 cri.go:89] found id: ""
	I1124 09:56:21.364601 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.364609 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:21.364615 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:21.364688 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:21.391798 1849924 cri.go:89] found id: ""
	I1124 09:56:21.391852 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.391859 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:21.391865 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:21.391939 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:21.417518 1849924 cri.go:89] found id: ""
	I1124 09:56:21.417532 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.417539 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:21.417545 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:21.417600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:21.443079 1849924 cri.go:89] found id: ""
	I1124 09:56:21.443092 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.443099 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:21.443104 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:21.443164 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:21.483649 1849924 cri.go:89] found id: ""
	I1124 09:56:21.483663 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.483685 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:21.483691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:21.483758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:21.513352 1849924 cri.go:89] found id: ""
	I1124 09:56:21.513367 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.513374 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:21.513383 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:21.513445 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:21.583074 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:21.583095 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:21.598415 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:21.598432 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:21.661326 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:21.661336 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:21.661348 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.742506 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:21.742527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:24.271763 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:24.281983 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:24.282044 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:24.313907 1849924 cri.go:89] found id: ""
	I1124 09:56:24.313920 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.313928 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:24.313934 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:24.314006 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:24.338982 1849924 cri.go:89] found id: ""
	I1124 09:56:24.338996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.339003 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:24.339009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:24.339067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:24.365195 1849924 cri.go:89] found id: ""
	I1124 09:56:24.365209 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.365216 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:24.365222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:24.365292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:24.390215 1849924 cri.go:89] found id: ""
	I1124 09:56:24.390228 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.390235 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:24.390241 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:24.390299 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:24.415458 1849924 cri.go:89] found id: ""
	I1124 09:56:24.415472 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.415479 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:24.415484 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:24.415544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:24.442483 1849924 cri.go:89] found id: ""
	I1124 09:56:24.442497 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.442504 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:24.442510 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:24.442571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:24.478898 1849924 cri.go:89] found id: ""
	I1124 09:56:24.478912 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.478919 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:24.478926 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:24.478936 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:24.559295 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:24.559320 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:24.575521 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:24.575538 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:24.643962 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:24.643974 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:24.643985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:24.721863 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:24.721883 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.252684 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:27.262544 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:27.262604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:27.288190 1849924 cri.go:89] found id: ""
	I1124 09:56:27.288203 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.288211 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:27.288216 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:27.288276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:27.315955 1849924 cri.go:89] found id: ""
	I1124 09:56:27.315975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.315983 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:27.315988 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:27.316050 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:27.341613 1849924 cri.go:89] found id: ""
	I1124 09:56:27.341626 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.341633 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:27.341639 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:27.341699 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:27.366677 1849924 cri.go:89] found id: ""
	I1124 09:56:27.366690 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.366697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:27.366703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:27.366768 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:27.392001 1849924 cri.go:89] found id: ""
	I1124 09:56:27.392015 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.392021 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:27.392027 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:27.392085 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:27.419410 1849924 cri.go:89] found id: ""
	I1124 09:56:27.419430 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.419436 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:27.419442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:27.419501 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:27.444780 1849924 cri.go:89] found id: ""
	I1124 09:56:27.444794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.444801 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:27.444809 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:27.444824 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.478836 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:27.478853 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:27.552795 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:27.552814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:27.567935 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:27.567988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:27.630838 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:27.630849 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:27.630859 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:30.212620 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:30.223248 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:30.223313 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:30.249863 1849924 cri.go:89] found id: ""
	I1124 09:56:30.249876 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.249883 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:30.249888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:30.249947 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:30.275941 1849924 cri.go:89] found id: ""
	I1124 09:56:30.275955 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.275974 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:30.275980 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:30.276053 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:30.300914 1849924 cri.go:89] found id: ""
	I1124 09:56:30.300928 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.300944 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:30.300950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:30.301016 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:30.325980 1849924 cri.go:89] found id: ""
	I1124 09:56:30.325994 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.326011 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:30.326018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:30.326089 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:30.352023 1849924 cri.go:89] found id: ""
	I1124 09:56:30.352038 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.352045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:30.352050 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:30.352121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:30.379711 1849924 cri.go:89] found id: ""
	I1124 09:56:30.379724 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.379731 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:30.379736 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:30.379801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:30.409210 1849924 cri.go:89] found id: ""
	I1124 09:56:30.409224 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.409232 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:30.409240 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:30.409251 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:30.437995 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:30.438012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:30.507429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:30.507448 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:30.525911 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:30.525927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:30.589196 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:30.589210 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:30.589220 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:33.172621 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:33.182671 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:33.182730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:33.211695 1849924 cri.go:89] found id: ""
	I1124 09:56:33.211709 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.211716 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:33.211721 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:33.211779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:33.237798 1849924 cri.go:89] found id: ""
	I1124 09:56:33.237811 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.237818 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:33.237824 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:33.237885 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:33.262147 1849924 cri.go:89] found id: ""
	I1124 09:56:33.262160 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.262167 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:33.262172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:33.262230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:33.286667 1849924 cri.go:89] found id: ""
	I1124 09:56:33.286681 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.286690 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:33.286696 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:33.286754 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:33.311109 1849924 cri.go:89] found id: ""
	I1124 09:56:33.311122 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.311129 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:33.311135 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:33.311198 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:33.336757 1849924 cri.go:89] found id: ""
	I1124 09:56:33.336781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.336790 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:33.336796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:33.336864 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:33.365159 1849924 cri.go:89] found id: ""
	I1124 09:56:33.365172 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.365179 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:33.365186 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:33.365197 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:33.393002 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:33.393017 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:33.457704 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:33.457724 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:33.473674 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:33.473700 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:33.547251 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:33.547261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:33.547274 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.125180 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:36.135549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:36.135611 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:36.161892 1849924 cri.go:89] found id: ""
	I1124 09:56:36.161906 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.161913 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:36.161919 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:36.161980 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:36.192254 1849924 cri.go:89] found id: ""
	I1124 09:56:36.192268 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.192275 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:36.192280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:36.192341 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:36.219675 1849924 cri.go:89] found id: ""
	I1124 09:56:36.219689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.219696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:36.219702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:36.219760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:36.249674 1849924 cri.go:89] found id: ""
	I1124 09:56:36.249688 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.249695 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:36.249700 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:36.249756 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:36.276115 1849924 cri.go:89] found id: ""
	I1124 09:56:36.276129 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.276136 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:36.276141 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:36.276199 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:36.303472 1849924 cri.go:89] found id: ""
	I1124 09:56:36.303486 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.303494 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:36.303499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:36.303558 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:36.332774 1849924 cri.go:89] found id: ""
	I1124 09:56:36.332789 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.332796 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:36.332804 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:36.332814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.410262 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:36.410282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:36.442608 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:36.442625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:36.517228 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:36.517247 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:36.532442 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:36.532459 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:36.598941 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:39.099623 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:39.110286 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:39.110347 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:39.135094 1849924 cri.go:89] found id: ""
	I1124 09:56:39.135108 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.135115 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:39.135120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:39.135184 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:39.161664 1849924 cri.go:89] found id: ""
	I1124 09:56:39.161678 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.161685 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:39.161691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:39.161749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:39.186843 1849924 cri.go:89] found id: ""
	I1124 09:56:39.186857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.186865 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:39.186870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:39.186930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:39.212864 1849924 cri.go:89] found id: ""
	I1124 09:56:39.212878 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.212889 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:39.212895 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:39.212953 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:39.243329 1849924 cri.go:89] found id: ""
	I1124 09:56:39.243343 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.243350 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:39.243356 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:39.243421 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:39.268862 1849924 cri.go:89] found id: ""
	I1124 09:56:39.268875 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.268883 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:39.268888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:39.268950 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:39.295966 1849924 cri.go:89] found id: ""
	I1124 09:56:39.295979 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.295986 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:39.295993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:39.296004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:39.327310 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:39.327325 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:39.392831 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:39.392850 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:39.407904 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:39.407920 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:39.476692 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:39.476716 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:39.476729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.055953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:42.067687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:42.067767 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:42.096948 1849924 cri.go:89] found id: ""
	I1124 09:56:42.096963 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.096971 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:42.096977 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:42.097039 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:42.128766 1849924 cri.go:89] found id: ""
	I1124 09:56:42.128781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.128789 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:42.128795 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:42.128861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:42.160266 1849924 cri.go:89] found id: ""
	I1124 09:56:42.160283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.160291 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:42.160297 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:42.160368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:42.191973 1849924 cri.go:89] found id: ""
	I1124 09:56:42.191996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.192004 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:42.192011 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:42.192081 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:42.226204 1849924 cri.go:89] found id: ""
	I1124 09:56:42.226218 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.226226 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:42.226232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:42.226316 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:42.253907 1849924 cri.go:89] found id: ""
	I1124 09:56:42.253922 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.253929 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:42.253935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:42.253998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:42.282770 1849924 cri.go:89] found id: ""
	I1124 09:56:42.282786 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.282793 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:42.282800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:42.282811 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:42.298712 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:42.298729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:42.363239 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:42.363249 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:42.363260 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.437643 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:42.437663 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:42.475221 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:42.475237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:45.048529 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:45.067334 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:45.067432 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:45.099636 1849924 cri.go:89] found id: ""
	I1124 09:56:45.099652 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.099659 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:45.099666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:45.099762 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:45.132659 1849924 cri.go:89] found id: ""
	I1124 09:56:45.132693 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.132701 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:45.132708 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:45.132792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:45.169282 1849924 cri.go:89] found id: ""
	I1124 09:56:45.169306 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.169314 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:45.169320 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:45.169398 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:45.226517 1849924 cri.go:89] found id: ""
	I1124 09:56:45.226533 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.226542 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:45.226548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:45.226626 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:45.265664 1849924 cri.go:89] found id: ""
	I1124 09:56:45.265680 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.265687 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:45.265693 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:45.265759 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:45.298503 1849924 cri.go:89] found id: ""
	I1124 09:56:45.298517 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.298525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:45.298531 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:45.298599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:45.329403 1849924 cri.go:89] found id: ""
	I1124 09:56:45.329436 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.329445 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:45.329453 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:45.329464 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:45.345344 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:45.345361 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:45.412742 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:45.412752 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:45.412763 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:45.493978 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:45.493998 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:45.531425 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:45.531441 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.098018 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:48.108764 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:48.108836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:48.134307 1849924 cri.go:89] found id: ""
	I1124 09:56:48.134321 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.134328 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:48.134333 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:48.134390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:48.159252 1849924 cri.go:89] found id: ""
	I1124 09:56:48.159266 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.159273 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:48.159279 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:48.159337 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:48.184464 1849924 cri.go:89] found id: ""
	I1124 09:56:48.184478 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.184496 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:48.184507 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:48.184589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:48.209500 1849924 cri.go:89] found id: ""
	I1124 09:56:48.209513 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.209520 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:48.209526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:48.209590 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:48.236025 1849924 cri.go:89] found id: ""
	I1124 09:56:48.236039 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.236045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:48.236051 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:48.236121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:48.262196 1849924 cri.go:89] found id: ""
	I1124 09:56:48.262210 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.262216 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:48.262222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:48.262285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:48.286684 1849924 cri.go:89] found id: ""
	I1124 09:56:48.286698 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.286705 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:48.286712 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:48.286725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.354155 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:48.354174 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:48.369606 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:48.369625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:48.436183 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:48.436193 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:48.436207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:48.516667 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:48.516688 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.047020 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:51.057412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:51.057477 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:51.087137 1849924 cri.go:89] found id: ""
	I1124 09:56:51.087159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.087167 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:51.087172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:51.087241 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:51.115003 1849924 cri.go:89] found id: ""
	I1124 09:56:51.115018 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.115025 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:51.115031 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:51.115093 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:51.144604 1849924 cri.go:89] found id: ""
	I1124 09:56:51.144622 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.144631 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:51.144638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:51.144706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:51.172310 1849924 cri.go:89] found id: ""
	I1124 09:56:51.172323 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.172338 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:51.172345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:51.172413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:51.200354 1849924 cri.go:89] found id: ""
	I1124 09:56:51.200376 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.200384 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:51.200390 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:51.200463 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:51.225889 1849924 cri.go:89] found id: ""
	I1124 09:56:51.225903 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.225911 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:51.225917 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:51.225974 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:51.250937 1849924 cri.go:89] found id: ""
	I1124 09:56:51.250950 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.250956 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:51.250972 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:51.250984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.281935 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:51.281951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:51.346955 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:51.346975 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:51.362412 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:51.362428 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:51.424513 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:51.424523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:51.424534 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.006160 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:54.017499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:54.017565 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:54.048035 1849924 cri.go:89] found id: ""
	I1124 09:56:54.048049 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.048056 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:54.048062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:54.048117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:54.075193 1849924 cri.go:89] found id: ""
	I1124 09:56:54.075207 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.075214 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:54.075220 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:54.075278 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:54.101853 1849924 cri.go:89] found id: ""
	I1124 09:56:54.101868 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.101875 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:54.101880 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:54.101938 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:54.128585 1849924 cri.go:89] found id: ""
	I1124 09:56:54.128600 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.128608 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:54.128614 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:54.128673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:54.154726 1849924 cri.go:89] found id: ""
	I1124 09:56:54.154742 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.154750 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:54.154756 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:54.154819 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:54.180936 1849924 cri.go:89] found id: ""
	I1124 09:56:54.180975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.180984 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:54.180990 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:54.181070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:54.209038 1849924 cri.go:89] found id: ""
	I1124 09:56:54.209060 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.209067 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:54.209075 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:54.209085 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:54.279263 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:54.279289 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:54.295105 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:54.295131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:54.367337 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:54.367348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:54.367360 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.442973 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:54.442995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:56.980627 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:56.990375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:56.990434 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:57.016699 1849924 cri.go:89] found id: ""
	I1124 09:56:57.016713 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.016720 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:57.016726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:57.016789 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:57.042924 1849924 cri.go:89] found id: ""
	I1124 09:56:57.042938 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.042945 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:57.042950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:57.043009 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:57.071972 1849924 cri.go:89] found id: ""
	I1124 09:56:57.071986 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.071993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:57.071998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:57.072057 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:57.097765 1849924 cri.go:89] found id: ""
	I1124 09:56:57.097780 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.097789 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:57.097796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:57.097861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:57.124764 1849924 cri.go:89] found id: ""
	I1124 09:56:57.124778 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.124796 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:57.124802 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:57.124871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:57.151558 1849924 cri.go:89] found id: ""
	I1124 09:56:57.151584 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.151591 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:57.151597 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:57.151667 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:57.178335 1849924 cri.go:89] found id: ""
	I1124 09:56:57.178348 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.178355 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:57.178372 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:57.178383 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:57.253968 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:57.253988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:57.284364 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:57.284380 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:57.349827 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:57.349847 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:57.364617 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:57.364633 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:57.425688 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:59.926489 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:59.936801 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:59.936870 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:59.961715 1849924 cri.go:89] found id: ""
	I1124 09:56:59.961728 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.961735 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:59.961741 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:59.961801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:59.990466 1849924 cri.go:89] found id: ""
	I1124 09:56:59.990480 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.990488 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:59.990494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:59.990554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:00.129137 1849924 cri.go:89] found id: ""
	I1124 09:57:00.129161 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.129169 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:00.129175 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:00.129257 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:00.211462 1849924 cri.go:89] found id: ""
	I1124 09:57:00.211478 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.211490 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:00.211506 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:00.211593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:00.274315 1849924 cri.go:89] found id: ""
	I1124 09:57:00.274338 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.274346 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:00.274363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:00.274453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:00.321199 1849924 cri.go:89] found id: ""
	I1124 09:57:00.321233 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.321241 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:00.321247 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:00.321324 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:00.372845 1849924 cri.go:89] found id: ""
	I1124 09:57:00.372861 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.372869 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:00.372878 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:00.372889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:00.444462 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:00.444485 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:00.465343 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:00.465381 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:00.553389 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:00.553402 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:00.553418 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:00.632199 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:00.632219 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:03.162773 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:03.173065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:03.173150 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:03.200418 1849924 cri.go:89] found id: ""
	I1124 09:57:03.200431 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.200439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:03.200444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:03.200502 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:03.227983 1849924 cri.go:89] found id: ""
	I1124 09:57:03.227997 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.228004 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:03.228009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:03.228070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:03.257554 1849924 cri.go:89] found id: ""
	I1124 09:57:03.257568 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.257575 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:03.257581 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:03.257639 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:03.283198 1849924 cri.go:89] found id: ""
	I1124 09:57:03.283210 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.283217 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:03.283223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:03.283280 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:03.307981 1849924 cri.go:89] found id: ""
	I1124 09:57:03.307994 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.308002 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:03.308007 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:03.308063 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:03.337021 1849924 cri.go:89] found id: ""
	I1124 09:57:03.337035 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.337042 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:03.337047 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:03.337130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:03.362116 1849924 cri.go:89] found id: ""
	I1124 09:57:03.362130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.362137 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:03.362144 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:03.362155 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:03.427932 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:03.427951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:03.442952 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:03.442968 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:03.527978 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:03.527989 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:03.528002 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:03.603993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:03.604012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.134966 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:06.147607 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:06.147673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:06.173217 1849924 cri.go:89] found id: ""
	I1124 09:57:06.173231 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.173238 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:06.173243 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:06.173302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:06.203497 1849924 cri.go:89] found id: ""
	I1124 09:57:06.203511 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.203518 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:06.203524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:06.203581 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:06.232192 1849924 cri.go:89] found id: ""
	I1124 09:57:06.232205 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.232212 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:06.232219 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:06.232276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:06.261698 1849924 cri.go:89] found id: ""
	I1124 09:57:06.261711 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.261717 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:06.261723 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:06.261779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:06.286623 1849924 cri.go:89] found id: ""
	I1124 09:57:06.286642 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.286650 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:06.286656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:06.286717 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:06.316085 1849924 cri.go:89] found id: ""
	I1124 09:57:06.316098 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.316105 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:06.316110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:06.316169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:06.344243 1849924 cri.go:89] found id: ""
	I1124 09:57:06.344257 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.344264 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:06.344273 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:06.344283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.375793 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:06.375809 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:06.441133 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:06.441160 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:06.457259 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:06.457282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:06.534017 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:06.534028 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:06.534040 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.110740 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:09.122421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:09.122484 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:09.148151 1849924 cri.go:89] found id: ""
	I1124 09:57:09.148165 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.148172 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:09.148177 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:09.148235 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:09.173265 1849924 cri.go:89] found id: ""
	I1124 09:57:09.173279 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.173288 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:09.173295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:09.173357 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:09.198363 1849924 cri.go:89] found id: ""
	I1124 09:57:09.198377 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.198384 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:09.198389 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:09.198447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:09.224567 1849924 cri.go:89] found id: ""
	I1124 09:57:09.224581 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.224588 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:09.224594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:09.224652 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:09.249182 1849924 cri.go:89] found id: ""
	I1124 09:57:09.249195 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.249205 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:09.249210 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:09.249281 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:09.274039 1849924 cri.go:89] found id: ""
	I1124 09:57:09.274053 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.274060 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:09.274065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:09.274125 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:09.299730 1849924 cri.go:89] found id: ""
	I1124 09:57:09.299744 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.299751 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:09.299758 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:09.299770 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:09.364094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:09.364105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:09.364120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.441482 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:09.441504 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:09.479944 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:09.479961 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:09.549349 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:09.549367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:12.064927 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:12.075315 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:12.075376 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:12.103644 1849924 cri.go:89] found id: ""
	I1124 09:57:12.103658 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.103665 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:12.103670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:12.103774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:12.129120 1849924 cri.go:89] found id: ""
	I1124 09:57:12.129134 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.129141 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:12.129147 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:12.129215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:12.156010 1849924 cri.go:89] found id: ""
	I1124 09:57:12.156024 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.156031 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:12.156036 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:12.156094 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:12.184275 1849924 cri.go:89] found id: ""
	I1124 09:57:12.184289 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.184296 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:12.184301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:12.184362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:12.214700 1849924 cri.go:89] found id: ""
	I1124 09:57:12.214713 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.214726 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:12.214732 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:12.214792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:12.239546 1849924 cri.go:89] found id: ""
	I1124 09:57:12.239559 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.239566 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:12.239572 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:12.239635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:12.264786 1849924 cri.go:89] found id: ""
	I1124 09:57:12.264800 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.264806 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:12.264814 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:12.264826 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:12.324457 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:12.324467 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:12.324477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:12.401396 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:12.401417 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:12.432520 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:12.432535 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:12.502857 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:12.502877 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.018809 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:15.038661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:15.038741 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:15.069028 1849924 cri.go:89] found id: ""
	I1124 09:57:15.069043 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.069050 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:15.069056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:15.069139 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:15.096495 1849924 cri.go:89] found id: ""
	I1124 09:57:15.096513 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.096521 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:15.096526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:15.096593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:15.125417 1849924 cri.go:89] found id: ""
	I1124 09:57:15.125430 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.125438 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:15.125444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:15.125508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:15.152259 1849924 cri.go:89] found id: ""
	I1124 09:57:15.152274 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.152281 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:15.152287 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:15.152348 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:15.178920 1849924 cri.go:89] found id: ""
	I1124 09:57:15.178934 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.178942 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:15.178947 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:15.179024 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:15.207630 1849924 cri.go:89] found id: ""
	I1124 09:57:15.207643 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.207650 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:15.207656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:15.207715 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:15.237971 1849924 cri.go:89] found id: ""
	I1124 09:57:15.237985 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.237992 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:15.238000 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:15.238011 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:15.305169 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:15.305187 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.320240 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:15.320257 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:15.393546 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:15.393556 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:15.393592 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:15.470159 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:15.470179 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:18.001255 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:18.013421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:18.013488 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:18.040787 1849924 cri.go:89] found id: ""
	I1124 09:57:18.040801 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.040808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:18.040814 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:18.040873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:18.066460 1849924 cri.go:89] found id: ""
	I1124 09:57:18.066475 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.066482 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:18.066487 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:18.066544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:18.093970 1849924 cri.go:89] found id: ""
	I1124 09:57:18.093983 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.093990 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:18.093998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:18.094070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:18.119292 1849924 cri.go:89] found id: ""
	I1124 09:57:18.119306 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.119312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:18.119318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:18.119375 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:18.144343 1849924 cri.go:89] found id: ""
	I1124 09:57:18.144356 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.144363 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:18.144369 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:18.144428 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:18.176349 1849924 cri.go:89] found id: ""
	I1124 09:57:18.176362 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.176369 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:18.176375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:18.176435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:18.200900 1849924 cri.go:89] found id: ""
	I1124 09:57:18.200913 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.200920 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:18.200927 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:18.200938 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:18.266434 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:18.266452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:18.281611 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:18.281627 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:18.347510 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:18.347523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:18.347536 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:18.435234 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:18.435254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:20.973569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:20.984347 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:20.984418 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:21.011115 1849924 cri.go:89] found id: ""
	I1124 09:57:21.011130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.011137 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:21.011142 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:21.011204 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:21.041877 1849924 cri.go:89] found id: ""
	I1124 09:57:21.041891 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.041899 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:21.041904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:21.041963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:21.067204 1849924 cri.go:89] found id: ""
	I1124 09:57:21.067217 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.067224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:21.067229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:21.067288 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:21.096444 1849924 cri.go:89] found id: ""
	I1124 09:57:21.096458 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.096464 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:21.096470 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:21.096526 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:21.122011 1849924 cri.go:89] found id: ""
	I1124 09:57:21.122025 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.122033 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:21.122038 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:21.122098 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:21.150504 1849924 cri.go:89] found id: ""
	I1124 09:57:21.150518 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.150525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:21.150530 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:21.150601 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:21.179560 1849924 cri.go:89] found id: ""
	I1124 09:57:21.179573 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.179579 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:21.179587 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:21.179597 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:21.263112 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:21.263134 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:21.291875 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:21.291891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:21.358120 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:21.358139 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:21.373381 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:21.373401 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:21.437277 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:23.938404 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:23.948703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:23.948770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:23.975638 1849924 cri.go:89] found id: ""
	I1124 09:57:23.975653 1849924 logs.go:282] 0 containers: []
	W1124 09:57:23.975660 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:23.975666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:23.975797 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:24.003099 1849924 cri.go:89] found id: ""
	I1124 09:57:24.003114 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.003122 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:24.003127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:24.003195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:24.031320 1849924 cri.go:89] found id: ""
	I1124 09:57:24.031333 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.031340 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:24.031345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:24.031412 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:24.057464 1849924 cri.go:89] found id: ""
	I1124 09:57:24.057479 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.057486 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:24.057491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:24.057560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:24.083571 1849924 cri.go:89] found id: ""
	I1124 09:57:24.083586 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.083593 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:24.083598 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:24.083656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:24.109710 1849924 cri.go:89] found id: ""
	I1124 09:57:24.109724 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.109732 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:24.109737 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:24.109810 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:24.134957 1849924 cri.go:89] found id: ""
	I1124 09:57:24.134971 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.134978 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:24.134985 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:24.134995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:24.206698 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:24.206725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:24.221977 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:24.221995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:24.287450 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:57:24.287461 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:24.287474 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:24.364870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:24.364890 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
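The "container status" command above uses a shell fallback chain: run crictl if it can be located, otherwise fall back to docker. A minimal, self-contained sketch of that pattern (with a hypothetical `primary_tool` standing in for crictl so the snippet runs anywhere):

```shell
# Fallback pattern from the log line above: try the preferred tool first,
# fall back to an alternative when it is missing or fails.
# 'primary_tool' is a hypothetical name standing in for crictl.
list_containers() {
  command -v primary_tool >/dev/null 2>&1 && primary_tool ps -a \
    || echo "fallback: docker ps -a"
}
list_containers
```

Because `primary_tool` does not exist, `command -v` fails and the `||` branch runs, mirroring how the log's one-liner falls through to `sudo docker ps -a` on hosts without crictl.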
	I1124 09:57:26.899825 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:26.911192 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:26.911260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:26.937341 1849924 cri.go:89] found id: ""
	I1124 09:57:26.937355 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.937361 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:26.937367 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:26.937429 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:26.966037 1849924 cri.go:89] found id: ""
	I1124 09:57:26.966050 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.966057 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:26.966062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:26.966119 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:26.994487 1849924 cri.go:89] found id: ""
	I1124 09:57:26.994501 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.994508 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:26.994514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:26.994572 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:27.024331 1849924 cri.go:89] found id: ""
	I1124 09:57:27.024345 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.024351 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:27.024357 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:27.024414 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:27.051922 1849924 cri.go:89] found id: ""
	I1124 09:57:27.051936 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.051943 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:27.051949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:27.052007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:27.079084 1849924 cri.go:89] found id: ""
	I1124 09:57:27.079097 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.079104 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:27.079110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:27.079166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:27.105333 1849924 cri.go:89] found id: ""
	I1124 09:57:27.105346 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.105362 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:27.105371 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:27.105399 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:27.136135 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:27.136151 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:27.202777 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:27.202797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:27.218147 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:27.218169 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:27.287094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:57:27.287105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:27.287116 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:29.863883 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:29.874162 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:29.874270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:29.899809 1849924 cri.go:89] found id: ""
	I1124 09:57:29.899825 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.899833 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:29.899839 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:29.899897 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:29.925268 1849924 cri.go:89] found id: ""
	I1124 09:57:29.925282 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.925289 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:29.925295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:29.925355 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:29.953756 1849924 cri.go:89] found id: ""
	I1124 09:57:29.953770 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.953778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:29.953783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:29.953844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:29.979723 1849924 cri.go:89] found id: ""
	I1124 09:57:29.979737 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.979744 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:29.979750 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:29.979809 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:30.029207 1849924 cri.go:89] found id: ""
	I1124 09:57:30.029223 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.029231 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:30.029237 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:30.029307 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:30.086347 1849924 cri.go:89] found id: ""
	I1124 09:57:30.086364 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.086374 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:30.086381 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:30.086453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:30.117385 1849924 cri.go:89] found id: ""
	I1124 09:57:30.117412 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.117420 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:30.117429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:30.117442 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:30.134069 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:30.134089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:30.200106 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:57:30.200116 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:30.200131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:30.277714 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:30.277734 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:30.306530 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:30.306548 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:32.873889 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:32.884169 1849924 kubeadm.go:602] duration metric: took 4m3.946947382s to restartPrimaryControlPlane
	W1124 09:57:32.884229 1849924 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1124 09:57:32.884313 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 09:57:33.294612 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:57:33.307085 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:57:33.314867 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:57:33.314936 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:57:33.322582 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:57:33.322593 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 09:57:33.322667 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:57:33.330196 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:57:33.330260 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:57:33.337917 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:57:33.345410 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:57:33.345471 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:57:33.352741 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.360084 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:57:33.360141 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.367359 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:57:33.374680 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:57:33.374740 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
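The four grep/rm pairs above all apply one rule: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so `kubeadm init` regenerates it. A self-contained sketch using temp files (the endpoint is taken from the log; the file contents are invented for illustration):

```shell
# Stale-config cleanup, as in the log: a kubeconfig that does not mention
# the expected endpoint is removed so 'kubeadm init' can rewrite it.
endpoint="https://control-plane.minikube.internal:8441"
tmpdir=$(mktemp -d)
printf 'server: %s\n' "$endpoint" > "$tmpdir/admin.conf"          # up to date
printf 'server: https://10.0.0.1:6443\n' > "$tmpdir/kubelet.conf" # stale
for f in admin.conf kubelet.conf; do
  grep -q "$endpoint" "$tmpdir/$f" 2>/dev/null || rm -f "$tmpdir/$f"
done
ls "$tmpdir"   # only admin.conf survives
```

In this run all four files were already absent, so every grep exited 2 and every conf file was (re)moved, which is why the log reports "config check failed, skipping stale config cleanup" with an empty file list.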
	I1124 09:57:33.381720 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:57:33.421475 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:57:33.421672 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:57:33.492568 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:57:33.492631 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:57:33.492668 1849924 kubeadm.go:319] OS: Linux
	I1124 09:57:33.492712 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:57:33.492759 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:57:33.492805 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:57:33.492852 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:57:33.492898 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:57:33.492945 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:57:33.492989 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:57:33.493036 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:57:33.493080 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:57:33.559811 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:57:33.559935 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:57:33.560031 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:57:33.569641 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:57:33.572593 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 09:57:33.572694 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:57:33.572778 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:57:33.572897 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 09:57:33.572970 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 09:57:33.573053 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 09:57:33.573134 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 09:57:33.573209 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 09:57:33.573281 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 09:57:33.573362 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 09:57:33.573444 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 09:57:33.573489 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 09:57:33.573554 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:57:34.404229 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:57:34.574070 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:57:34.974228 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:57:35.133185 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:57:35.260833 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:57:35.261355 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:57:35.265684 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:57:35.269119 1849924 out.go:252]   - Booting up control plane ...
	I1124 09:57:35.269213 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:57:35.269289 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:57:35.269807 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:57:35.284618 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:57:35.284910 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:57:35.293324 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:57:35.293620 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:57:35.293661 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:57:35.424973 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:57:35.425087 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:01:35.425195 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000242606s
	I1124 10:01:35.425226 1849924 kubeadm.go:319] 
	I1124 10:01:35.425316 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:01:35.425374 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:01:35.425488 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:01:35.425495 1849924 kubeadm.go:319] 
	I1124 10:01:35.425617 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:01:35.425655 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:01:35.425685 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:01:35.425690 1849924 kubeadm.go:319] 
	I1124 10:01:35.429378 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:01:35.429792 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:01:35.429899 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:01:35.430134 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:01:35.430138 1849924 kubeadm.go:319] 
	I1124 10:01:35.430206 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
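The wait-control-plane phase that fails above is a poll-until-deadline loop against the kubelet's healthz endpoint. A runnable sketch of that control flow, with a counter-based stub in place of the real `curl -sSL http://127.0.0.1:10248/healthz` probe and an attempt count in place of the 4m0s deadline:

```shell
# Poll-until-healthy loop, as kubeadm's wait-control-plane phase does.
# healthz() is a stub: the real probe is an HTTP GET that returns "ok"
# once the kubelet is serving; here it "becomes healthy" on attempt 3.
healthz() { [ "$1" -ge 3 ] && echo ok; }
attempt=0 status=""
while [ "$attempt" -lt 10 ] && [ "$status" != "ok" ]; do
  attempt=$((attempt + 1))
  status=$(healthz "$attempt")
done
if [ "$status" = "ok" ]; then
  echo "kubelet healthy after $attempt attempts"
else
  echo "kubelet not healthy, giving up"   # the branch this log hit
fi
```

In the log the probe never succeeded within the deadline, so kubeadm took the failure branch and aborted with "The kubelet is not healthy after 4m0.000242606s".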
	W1124 10:01:35.430308 1849924 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000242606s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1124 10:01:35.430396 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 10:01:35.837421 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:01:35.850299 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 10:01:35.850356 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:01:35.858169 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 10:01:35.858180 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 10:01:35.858230 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 10:01:35.866400 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 10:01:35.866456 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 10:01:35.873856 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 10:01:35.881958 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 10:01:35.882015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 10:01:35.889339 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.896920 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 10:01:35.896977 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.904670 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 10:01:35.912117 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 10:01:35.912171 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:01:35.919741 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 10:01:35.956259 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 10:01:35.956313 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 10:01:36.031052 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 10:01:36.031118 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 10:01:36.031152 1849924 kubeadm.go:319] OS: Linux
	I1124 10:01:36.031196 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 10:01:36.031243 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 10:01:36.031289 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 10:01:36.031336 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 10:01:36.031383 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 10:01:36.031430 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 10:01:36.031474 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 10:01:36.031521 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 10:01:36.031566 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 10:01:36.099190 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 10:01:36.099321 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 10:01:36.099441 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 10:01:36.106857 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 10:01:36.112186 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 10:01:36.112274 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 10:01:36.112337 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 10:01:36.112413 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 10:01:36.112473 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 10:01:36.112542 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 10:01:36.112594 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 10:01:36.112656 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 10:01:36.112719 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 10:01:36.112792 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 10:01:36.112863 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 10:01:36.112900 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 10:01:36.112954 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 10:01:36.197295 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 10:01:36.531352 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 10:01:36.984185 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 10:01:37.290064 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 10:01:37.558441 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 10:01:37.559017 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 10:01:37.561758 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 10:01:37.564997 1849924 out.go:252]   - Booting up control plane ...
	I1124 10:01:37.565117 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 10:01:37.565200 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 10:01:37.566811 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 10:01:37.581952 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 10:01:37.582056 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 10:01:37.589882 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 10:01:37.590273 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 10:01:37.590483 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 10:01:37.733586 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 10:01:37.733692 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:05:37.728742 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000440097s
	I1124 10:05:37.728760 1849924 kubeadm.go:319] 
	I1124 10:05:37.729148 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:05:37.729217 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:05:37.729548 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:05:37.729554 1849924 kubeadm.go:319] 
	I1124 10:05:37.729744 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:05:37.729799 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:05:37.729853 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:05:37.729860 1849924 kubeadm.go:319] 
	I1124 10:05:37.734894 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:05:37.735345 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:05:37.735452 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:05:37.735693 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:05:37.735697 1849924 kubeadm.go:319] 
	I1124 10:05:37.735773 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1124 10:05:37.735829 1849924 kubeadm.go:403] duration metric: took 12m8.833752588s to StartCluster
	I1124 10:05:37.735872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:05:37.735930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:05:37.769053 1849924 cri.go:89] found id: ""
	I1124 10:05:37.769070 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.769076 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:05:37.769083 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:05:37.769166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:05:37.796753 1849924 cri.go:89] found id: ""
	I1124 10:05:37.796767 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.796774 1849924 logs.go:284] No container was found matching "etcd"
	I1124 10:05:37.796780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:05:37.796839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:05:37.822456 1849924 cri.go:89] found id: ""
	I1124 10:05:37.822470 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.822487 1849924 logs.go:284] No container was found matching "coredns"
	I1124 10:05:37.822492 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:05:37.822556 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:05:37.847572 1849924 cri.go:89] found id: ""
	I1124 10:05:37.847587 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.847594 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:05:37.847601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:05:37.847660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:05:37.874600 1849924 cri.go:89] found id: ""
	I1124 10:05:37.874614 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.874621 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:05:37.874630 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:05:37.874694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:05:37.899198 1849924 cri.go:89] found id: ""
	I1124 10:05:37.899212 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.899220 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:05:37.899226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:05:37.899286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:05:37.927492 1849924 cri.go:89] found id: ""
	I1124 10:05:37.927506 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.927513 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 10:05:37.927521 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 10:05:37.927531 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:05:37.996934 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 10:05:37.996954 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:05:38.018248 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:05:38.018265 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:05:38.095385 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:05:38.095401 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:05:38.095411 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:05:38.170993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 10:05:38.171016 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:05:38.204954 1849924 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1124 10:05:38.205004 1849924 out.go:285] * 
	W1124 10:05:38.205075 1849924 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
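The wait-control-plane failure above is a timed poll of the kubelet healthz endpoint (`curl -sSL http://127.0.0.1:10248/healthz`, up to 4m0s). A minimal sketch of an equivalent probe, with the URL and timeout taken from the log (this is an illustration, not kubeadm's actual implementation):

```python
import time
import urllib.request

def wait_for_healthz(url: str, timeout: float = 240.0, interval: float = 1.0) -> bool:
    """Poll a healthz endpoint until it returns HTTP 200 or the deadline passes.

    Mirrors the kubeadm [kubelet-check] loop seen in the log, which probes
    http://127.0.0.1:10248/healthz for up to 4m0s before giving up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            # Connection refused or timed out: kubelet not (yet) serving.
            pass
        time.sleep(interval)
    return False
```

In the failure above, every probe hits "connection refused" because the kubelet process exits immediately (see the kubelet section below), so the loop runs out its full 4-minute budget and kubeadm aborts the wait-control-plane phase.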
	
	W1124 10:05:38.205091 1849924 out.go:285] * 
	W1124 10:05:38.207567 1849924 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 10:05:38.212617 1849924 out.go:203] 
	W1124 10:05:38.216450 1849924 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.216497 1849924 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1124 10:05:38.216516 1849924 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1124 10:05:38.219595 1849924 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.571892719Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=aef09199-0d9c-4fcd-a86e-4644b84003d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601271581Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601433691Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.60148682Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.673335433Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=df47687b-4b6a-4acb-8d1e-f46521441883 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.702968936Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.703135847Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.703183371Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.732961212Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.733138962Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.7331819Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.260656424Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=d07a4d73-f74e-45cd-9c4d-fd518a9e69a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.301046166Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.301216031Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.3012543Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.339616029Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.33997436Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.340022221Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.333274376Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=0d853bf6-0cff-41f5-a62e-2b21fedcbf72 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366217435Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366382032Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366430164Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.391919753Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.392065551Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.392106118Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:08:12.084394   24758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:08:12.085003   24758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:08:12.086548   24758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:08:12.087012   24758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:08:12.088440   24758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:08:12 up  8:50,  0 user,  load average: 1.17, 0.51, 0.46
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 10:08:09 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:08:10 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1164.
	Nov 24 10:08:10 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:10 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:10 functional-373432 kubelet[24614]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:10 functional-373432 kubelet[24614]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:10 functional-373432 kubelet[24614]: E1124 10:08:10.267832   24614 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:08:10 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:08:10 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:08:10 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1165.
	Nov 24 10:08:10 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:10 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:11 functional-373432 kubelet[24652]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:11 functional-373432 kubelet[24652]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:11 functional-373432 kubelet[24652]: E1124 10:08:11.020586   24652 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:08:11 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:08:11 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:08:11 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1166.
	Nov 24 10:08:11 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:11 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:08:11 functional-373432 kubelet[24675]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:11 functional-373432 kubelet[24675]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:08:11 functional-373432 kubelet[24675]: E1124 10:08:11.756648   24675 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:08:11 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:08:11 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
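The crash loop above ("kubelet is configured to not run on a host using cgroup v1") matches the kubeadm SystemVerification warning earlier in the log: for kubelet v1.35+, running on a cgroup v1 host requires an explicit opt-in. A hedged sketch of that opt-in, assuming the `FailCgroupV1` option named in the warning serializes as `failCgroupV1` in the KubeletConfiguration file:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Assumption: the 'FailCgroupV1' option from the kubeadm warning serializes
# as 'failCgroupV1'. Setting it to false opts in to running on a deprecated
# cgroup v1 host (see KEP sig-node/5573-remove-cgroup-v1).
failCgroupV1: false
```

The warning also notes the SystemVerification check must be explicitly skipped; migrating the host to cgroup v2 is the supported path.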
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (344.886062ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-373432 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-373432 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (55.009487ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-373432 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-373432 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-373432 describe po hello-node-connect: exit status 1 (57.98097ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-373432 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-373432 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-373432 logs -l app=hello-node-connect: exit status 1 (57.527601ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-373432 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-373432 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-373432 describe svc hello-node-connect: exit status 1 (63.302967ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-373432 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
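Note: the `NetworkSettings.Ports` map in the inspect output above is what the test helpers later query (e.g. the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls in the start log). A minimal sketch of the same lookup in Python, using a trimmed, hypothetical stand-in for the real inspect payload rather than the full JSON above:

```python
import json

# Trimmed stand-in for `docker inspect <container>` output; the real payload
# (see the dump above) carries many more fields per container.
sample = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "35005"}],
        "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "35008"}]
      }
    }
  }
]
""")

def host_port(inspect_data, container_port):
    """Return the first mapped host port for a container port key like '22/tcp',
    or None when the port is unpublished (bindings can be null or empty)."""
    bindings = inspect_data[0]["NetworkSettings"]["Ports"].get(container_port) or []
    return bindings[0]["HostPort"] if bindings else None

print(host_port(sample, "22/tcp"))    # -> 35005, matching the SSH mapping above
```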
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (307.748165ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-373432 image load --daemon kicbase/echo-server:functional-373432 --alsologtostderr                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image ls                                                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image load --daemon kicbase/echo-server:functional-373432 --alsologtostderr                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image ls                                                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image load --daemon kicbase/echo-server:functional-373432 --alsologtostderr                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo cat /etc/ssl/certs/1806704.pem                                                                                                 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image ls                                                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo cat /usr/share/ca-certificates/1806704.pem                                                                                     │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image save kicbase/echo-server:functional-373432 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image rm kicbase/echo-server:functional-373432 --alsologtostderr                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo cat /etc/ssl/certs/18067042.pem                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image ls                                                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo cat /usr/share/ca-certificates/18067042.pem                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image save --daemon kicbase/echo-server:functional-373432 --alsologtostderr                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo cat /etc/test/nested/copy/1806704/hosts                                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh echo hello                                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ tunnel  │ functional-373432 tunnel --alsologtostderr                                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │                     │
	│ tunnel  │ functional-373432 tunnel --alsologtostderr                                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │                     │
	│ ssh     │ functional-373432 ssh cat /etc/hostname                                                                                                                   │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ tunnel  │ functional-373432 tunnel --alsologtostderr                                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │                     │
	│ addons  │ functional-373432 addons list                                                                                                                             │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:07 UTC │ 24 Nov 25 10:07 UTC │
	│ addons  │ functional-373432 addons list -o json                                                                                                                     │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:07 UTC │ 24 Nov 25 10:07 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:53:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:53:23.394373 1849924 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:53:23.394473 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394476 1849924 out.go:374] Setting ErrFile to fd 2...
	I1124 09:53:23.394480 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394868 1849924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:53:23.395314 1849924 out.go:368] Setting JSON to false
	I1124 09:53:23.396438 1849924 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30954,"bootTime":1763947050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:53:23.396523 1849924 start.go:143] virtualization:  
	I1124 09:53:23.399850 1849924 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:53:23.403618 1849924 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:53:23.403698 1849924 notify.go:221] Checking for updates...
	I1124 09:53:23.409546 1849924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:53:23.412497 1849924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:53:23.415264 1849924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:53:23.418109 1849924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:53:23.420908 1849924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:53:23.424158 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:23.424263 1849924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:53:23.449398 1849924 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:53:23.449524 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.505939 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.496540271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.506033 1849924 docker.go:319] overlay module found
	I1124 09:53:23.509224 1849924 out.go:179] * Using the docker driver based on existing profile
	I1124 09:53:23.512245 1849924 start.go:309] selected driver: docker
	I1124 09:53:23.512255 1849924 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.512340 1849924 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:53:23.512454 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.568317 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.558792888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.568738 1849924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:53:23.568763 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:23.568821 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:23.568862 1849924 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.571988 1849924 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:53:23.574929 1849924 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:53:23.577959 1849924 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:53:23.580671 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:23.580735 1849924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:53:23.600479 1849924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:53:23.600490 1849924 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:53:23.634350 1849924 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:53:24.054820 1849924 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:53:24.054990 1849924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:53:24.055122 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.055240 1849924 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:53:24.055269 1849924 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.055313 1849924 start.go:364] duration metric: took 27.192µs to acquireMachinesLock for "functional-373432"
	I1124 09:53:24.055327 1849924 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:53:24.055331 1849924 fix.go:54] fixHost starting: 
	I1124 09:53:24.055580 1849924 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:53:24.072844 1849924 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:53:24.072865 1849924 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:53:24.076050 1849924 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:53:24.076079 1849924 machine.go:94] provisionDockerMachine start ...
	I1124 09:53:24.076162 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.100870 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.101221 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.101228 1849924 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:53:24.232623 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.252893 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.252907 1849924 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:53:24.252988 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.280057 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.280362 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.280376 1849924 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:53:24.402975 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.467980 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.468079 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.499770 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.500067 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.500084 1849924 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:53:24.556663 1849924 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556759 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:53:24.556767 1849924 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.133µs
	I1124 09:53:24.556774 1849924 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:53:24.556785 1849924 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556814 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:53:24.556818 1849924 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.266µs
	I1124 09:53:24.556823 1849924 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556832 1849924 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556867 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:53:24.556871 1849924 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 39.738µs
	I1124 09:53:24.556876 1849924 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556884 1849924 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556911 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:53:24.556915 1849924 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.655µs
	I1124 09:53:24.556920 1849924 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556934 1849924 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556959 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:53:24.556963 1849924 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 35.948µs
	I1124 09:53:24.556967 1849924 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556975 1849924 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556999 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:53:24.557011 1849924 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.226µs
	I1124 09:53:24.557015 1849924 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:53:24.557023 1849924 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557048 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:53:24.557051 1849924 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 29.202µs
	I1124 09:53:24.557056 1849924 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:53:24.557065 1849924 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557089 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:53:24.557093 1849924 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.258µs
	I1124 09:53:24.557097 1849924 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:53:24.557129 1849924 cache.go:87] Successfully saved all images to host disk.
	I1124 09:53:24.653937 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:53:24.653952 1849924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:53:24.653984 1849924 ubuntu.go:190] setting up certificates
	I1124 09:53:24.653993 1849924 provision.go:84] configureAuth start
	I1124 09:53:24.654058 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:24.671316 1849924 provision.go:143] copyHostCerts
	I1124 09:53:24.671391 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:53:24.671399 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:53:24.671473 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:53:24.671573 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:53:24.671577 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:53:24.671611 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:53:24.671659 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:53:24.671662 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:53:24.671684 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:53:24.671727 1849924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:53:25.074688 1849924 provision.go:177] copyRemoteCerts
	I1124 09:53:25.074752 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:53:25.074789 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.095886 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.200905 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:53:25.221330 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:53:25.243399 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:53:25.263746 1849924 provision.go:87] duration metric: took 609.720286ms to configureAuth
	I1124 09:53:25.263762 1849924 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:53:25.263945 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:25.264045 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.283450 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:25.283754 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:25.283770 1849924 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:53:25.632249 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:53:25.632261 1849924 machine.go:97] duration metric: took 1.556176004s to provisionDockerMachine
	I1124 09:53:25.632272 1849924 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:53:25.632283 1849924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:53:25.632368 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:53:25.632405 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.650974 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.756910 1849924 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:53:25.760285 1849924 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:53:25.760302 1849924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:53:25.760312 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:53:25.760370 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:53:25.760445 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:53:25.760518 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:53:25.760561 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:53:25.767953 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:25.785397 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:53:25.802531 1849924 start.go:296] duration metric: took 170.24573ms for postStartSetup
	I1124 09:53:25.802613 1849924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:53:25.802665 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.819451 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.922232 1849924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:53:25.926996 1849924 fix.go:56] duration metric: took 1.871657348s for fixHost
	I1124 09:53:25.927011 1849924 start.go:83] releasing machines lock for "functional-373432", held for 1.871691088s
	I1124 09:53:25.927085 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:25.943658 1849924 ssh_runner.go:195] Run: cat /version.json
	I1124 09:53:25.943696 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.943958 1849924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:53:25.944002 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.980808 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.985182 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:26.175736 1849924 ssh_runner.go:195] Run: systemctl --version
	I1124 09:53:26.181965 1849924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:53:26.217601 1849924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:53:26.221860 1849924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:53:26.221923 1849924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:53:26.229857 1849924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:53:26.229870 1849924 start.go:496] detecting cgroup driver to use...
	I1124 09:53:26.229899 1849924 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:53:26.229945 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:53:26.244830 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:53:26.257783 1849924 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:53:26.257835 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:53:26.273202 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:53:26.286089 1849924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:53:26.392939 1849924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:53:26.505658 1849924 docker.go:234] disabling docker service ...
	I1124 09:53:26.505717 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:53:26.520682 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:53:26.533901 1849924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:53:26.643565 1849924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:53:26.781643 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:53:26.794102 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:53:26.807594 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:26.964951 1849924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:53:26.965014 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.974189 1849924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:53:26.974248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.982757 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.991310 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.000248 1849924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:53:27.009837 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.019258 1849924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.028248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.037276 1849924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:53:27.045218 1849924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:53:27.052631 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:27.162722 1849924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:53:27.344834 1849924 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:53:27.344893 1849924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:53:27.348791 1849924 start.go:564] Will wait 60s for crictl version
	I1124 09:53:27.348847 1849924 ssh_runner.go:195] Run: which crictl
	I1124 09:53:27.352314 1849924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:53:27.376797 1849924 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:53:27.376884 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.404280 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.437171 1849924 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:53:27.439969 1849924 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:53:27.457621 1849924 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:53:27.466585 1849924 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1124 09:53:27.469312 1849924 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:53:27.469546 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.636904 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.787069 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.940573 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:27.940635 1849924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:53:27.974420 1849924 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:53:27.974431 1849924 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:53:27.974436 1849924 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:53:27.974527 1849924 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:53:27.974612 1849924 ssh_runner.go:195] Run: crio config
	I1124 09:53:28.037679 1849924 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1124 09:53:28.037700 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:28.037709 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:28.037724 1849924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:53:28.037750 1849924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:53:28.037877 1849924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:53:28.037948 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:53:28.045873 1849924 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:53:28.045941 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:53:28.053444 1849924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:53:28.066325 1849924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:53:28.079790 1849924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1124 09:53:28.092701 1849924 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:53:28.096834 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:28.213078 1849924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:53:28.235943 1849924 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:53:28.235953 1849924 certs.go:195] generating shared ca certs ...
	I1124 09:53:28.235988 1849924 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:53:28.236165 1849924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:53:28.236216 1849924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:53:28.236222 1849924 certs.go:257] generating profile certs ...
	I1124 09:53:28.236320 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:53:28.236381 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:53:28.236430 1849924 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:53:28.236545 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:53:28.236581 1849924 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:53:28.236590 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:53:28.236617 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:53:28.236639 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:53:28.236676 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:53:28.236733 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:28.237452 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:53:28.267491 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:53:28.288261 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:53:28.304655 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:53:28.321607 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:53:28.339914 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:53:28.357697 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:53:28.374827 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:53:28.392170 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:53:28.410757 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:53:28.428776 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:53:28.446790 1849924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:53:28.459992 1849924 ssh_runner.go:195] Run: openssl version
	I1124 09:53:28.466084 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:53:28.474433 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478225 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478282 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.521415 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 09:53:28.529784 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:53:28.538178 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542108 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542164 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.583128 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:53:28.591113 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:53:28.599457 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603413 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603474 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.645543 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
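The `openssl x509 -hash` / `ln -fs .../<hash>.0` sequence above is OpenSSL's subject-hash trust-store convention: the linker name (e.g. `b5213941.0` for `minikubeCA.pem`) is the cert's subject hash plus a collision suffix. A minimal sketch with a throwaway cert (the `/tmp` paths and `demoCA` name are made up for illustration; the log does the same against `/usr/share/ca-certificates/*.pem` and `/etc/ssl/certs/`):

```shell
# Generate a throwaway self-signed CA to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.pem -days 2 -subj "/CN=demoCA" 2>/dev/null

# OpenSSL looks trusted certs up by subject-hash filename, so the symlink
# name must be the hash printed by `openssl x509 -hash`, suffixed with .0.
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"
ls -l "/tmp/${hash}.0"
```

The `test -L ... || ln -fs ...` guard in the log just makes the symlink creation idempotent across restarts.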
	I1124 09:53:28.653724 1849924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:53:28.657603 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:53:28.698734 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:53:28.739586 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:53:28.780289 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:53:28.820840 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:53:28.861343 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
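The six `openssl x509 ... -checkend 86400` runs above are expiry probes: `-checkend N` exits 0 only if the certificate is still valid N seconds from now, so a passing check lets minikube keep the existing cert instead of regenerating it. A sketch with a stand-in cert (the `/tmp` paths are hypothetical; the log checks `/var/lib/minikube/certs/*.crt`):

```shell
# Stand-in certificate, valid for 2 days.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 2 -subj "/CN=demo" 2>/dev/null

# Exit status 0 means the cert is still valid 86400s (24h) from now.
if openssl x509 -noout -in /tmp/demo.crt -checkend 86400; then
  echo "valid for at least 24h"
fi
```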
	I1124 09:53:28.902087 1849924 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:28.902167 1849924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:53:28.902236 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.929454 1849924 cri.go:89] found id: ""
	I1124 09:53:28.929519 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:53:28.937203 1849924 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:53:28.937213 1849924 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:53:28.937261 1849924 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:53:28.944668 1849924 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:28.945209 1849924 kubeconfig.go:125] found "functional-373432" server: "https://192.168.49.2:8441"
	I1124 09:53:28.946554 1849924 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:53:28.956044 1849924 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 09:38:48.454819060 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 09:53:28.085978644 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1124 09:53:28.956053 1849924 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:53:28.956064 1849924 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:53:28.956128 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.991786 1849924 cri.go:89] found id: ""
	I1124 09:53:28.991878 1849924 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:53:29.009992 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:53:29.018335 1849924 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 24 09:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 24 09:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Nov 24 09:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 24 09:42 /etc/kubernetes/scheduler.conf
	
	I1124 09:53:29.018393 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:53:29.026350 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:53:29.034215 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.034271 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:53:29.042061 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.049959 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.050015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.057477 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:53:29.065397 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.065453 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:53:29.072838 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:53:29.080812 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:29.126682 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:30.915283 1849924 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.788534288s)
	I1124 09:53:30.915375 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.124806 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.187302 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.234732 1849924 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:53:31.234802 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:31.735292 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.235922 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.735385 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.235894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.734984 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.235509 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.735644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.235724 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.235151 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.734994 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.235505 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.734925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.235891 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.235854 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.235929 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.734921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.234991 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.235015 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.734874 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.235403 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.734996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.235058 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.735496 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.235113 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.735894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.234930 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.735636 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.234914 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.734875 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.235656 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.735578 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.235469 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.735823 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.235926 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.235524 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.735679 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.235407 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.735614 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.235868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.734868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.235806 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.735801 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.235315 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.735919 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.735842 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.235491 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.235122 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.735029 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.235002 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.735695 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.236092 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.735024 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.235917 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.735341 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.235291 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.735026 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.235183 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.735898 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.235334 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.234896 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.735246 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.235531 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.235579 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.735599 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.234953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.734946 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.235705 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.735908 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.234909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.735831 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.235563 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.735909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.234992 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.735855 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.234936 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.734993 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.235585 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.235013 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.735371 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.235016 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.735593 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.735653 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.235793 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.734939 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.235317 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.735001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.235075 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.234969 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.735715 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.234859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.735010 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.235545 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.735305 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.235127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.734989 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.235601 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.734933 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.234986 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.735250 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.235727 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.734976 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.235644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.735675 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.735127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
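The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above are a fixed-interval poll: minikube reruns the match every ~500ms until the apiserver process appears or the wait budget (here, the full minute from 09:53:31 to 09:54:31) is exhausted. A minimal sketch of that loop, using a backgrounded `sleep` as a stand-in target (the pattern and iteration count are made up):

```shell
# Stand-in for the process being waited on.
sleep 5 &

# Poll every 0.5s, up to 20 tries, until pgrep finds a matching command line.
found=no
for i in $(seq 1 20); do
  if pgrep -f 'sleep 5' >/dev/null 2>&1; then
    found=yes
    break
  fi
  sleep 0.5
done
echo "found=$found"
```

In the log the loop expires without a match, which is why the run falls through to the `crictl ps` / `journalctl` log-gathering that follows.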
	I1124 09:54:31.234921 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:31.235007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:31.266239 1849924 cri.go:89] found id: ""
	I1124 09:54:31.266252 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.266259 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:31.266265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:31.266323 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:31.294586 1849924 cri.go:89] found id: ""
	I1124 09:54:31.294608 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.294616 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:31.294623 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:31.294694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:31.322061 1849924 cri.go:89] found id: ""
	I1124 09:54:31.322076 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.322083 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:31.322088 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:31.322159 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:31.349139 1849924 cri.go:89] found id: ""
	I1124 09:54:31.349154 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.349161 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:31.349167 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:31.349230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:31.379824 1849924 cri.go:89] found id: ""
	I1124 09:54:31.379838 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.379845 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:31.379850 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:31.379915 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:31.407206 1849924 cri.go:89] found id: ""
	I1124 09:54:31.407220 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.407228 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:31.407233 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:31.407296 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:31.435102 1849924 cri.go:89] found id: ""
	I1124 09:54:31.435117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.435123 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:31.435132 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:31.435143 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:31.504759 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:31.504779 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:31.520567 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:31.520584 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:31.587634 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:31.587666 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:31.587680 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:31.665843 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:31.665864 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.199426 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:34.210826 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:34.210886 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:34.249730 1849924 cri.go:89] found id: ""
	I1124 09:54:34.249743 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.249769 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:34.249774 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:34.249844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:34.279157 1849924 cri.go:89] found id: ""
	I1124 09:54:34.279171 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.279178 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:34.279183 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:34.279253 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:34.305617 1849924 cri.go:89] found id: ""
	I1124 09:54:34.305631 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.305655 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:34.305661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:34.305730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:34.331221 1849924 cri.go:89] found id: ""
	I1124 09:54:34.331235 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.331243 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:34.331249 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:34.331309 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:34.357361 1849924 cri.go:89] found id: ""
	I1124 09:54:34.357374 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.357381 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:34.357387 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:34.357447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:34.382790 1849924 cri.go:89] found id: ""
	I1124 09:54:34.382805 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.382812 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:34.382817 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:34.382882 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:34.408622 1849924 cri.go:89] found id: ""
	I1124 09:54:34.408635 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.408653 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:34.408661 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:34.408673 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:34.473355 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:34.473365 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:34.473376 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:34.560903 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:34.560924 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.589722 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:34.589738 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:34.659382 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:34.659407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.175501 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:37.187020 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:37.187082 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:37.215497 1849924 cri.go:89] found id: ""
	I1124 09:54:37.215511 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.215518 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:37.215524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:37.215584 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:37.252296 1849924 cri.go:89] found id: ""
	I1124 09:54:37.252310 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.252317 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:37.252323 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:37.252383 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:37.281216 1849924 cri.go:89] found id: ""
	I1124 09:54:37.281230 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.281237 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:37.281242 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:37.281302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:37.307335 1849924 cri.go:89] found id: ""
	I1124 09:54:37.307349 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.307356 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:37.307361 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:37.307435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:37.333186 1849924 cri.go:89] found id: ""
	I1124 09:54:37.333209 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.333217 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:37.333222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:37.333290 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:37.358046 1849924 cri.go:89] found id: ""
	I1124 09:54:37.358060 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.358068 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:37.358074 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:37.358130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:37.388252 1849924 cri.go:89] found id: ""
	I1124 09:54:37.388265 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.388273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:37.388280 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:37.388291 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:37.423715 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:37.423740 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:37.490800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:37.490819 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.506370 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:37.506387 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:37.571587 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:37.571597 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:37.571608 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.152603 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:40.164138 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:40.164210 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:40.192566 1849924 cri.go:89] found id: ""
	I1124 09:54:40.192581 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.192589 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:40.192594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:40.192677 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:40.233587 1849924 cri.go:89] found id: ""
	I1124 09:54:40.233616 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.233623 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:40.233628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:40.233702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:40.268152 1849924 cri.go:89] found id: ""
	I1124 09:54:40.268166 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.268173 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:40.268178 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:40.268258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:40.297572 1849924 cri.go:89] found id: ""
	I1124 09:54:40.297586 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.297593 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:40.297605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:40.297666 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:40.328480 1849924 cri.go:89] found id: ""
	I1124 09:54:40.328502 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.328511 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:40.328517 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:40.328583 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:40.354088 1849924 cri.go:89] found id: ""
	I1124 09:54:40.354102 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.354108 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:40.354114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:40.354172 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:40.384758 1849924 cri.go:89] found id: ""
	I1124 09:54:40.384772 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.384779 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:40.384786 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:40.384797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:40.452137 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:40.452157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:40.467741 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:40.467757 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:40.535224 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:40.535235 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:40.535246 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.615981 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:40.616005 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:43.148076 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:43.158106 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:43.158169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:43.182985 1849924 cri.go:89] found id: ""
	I1124 09:54:43.182999 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.183006 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:43.183012 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:43.183068 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:43.215806 1849924 cri.go:89] found id: ""
	I1124 09:54:43.215820 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.215837 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:43.215844 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:43.215903 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:43.244278 1849924 cri.go:89] found id: ""
	I1124 09:54:43.244301 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.244309 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:43.244314 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:43.244385 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:43.272908 1849924 cri.go:89] found id: ""
	I1124 09:54:43.272931 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.272938 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:43.272949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:43.273029 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:43.297907 1849924 cri.go:89] found id: ""
	I1124 09:54:43.297921 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.297927 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:43.297933 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:43.298008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:43.330376 1849924 cri.go:89] found id: ""
	I1124 09:54:43.330391 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.330397 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:43.330403 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:43.330459 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:43.359850 1849924 cri.go:89] found id: ""
	I1124 09:54:43.359864 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.359871 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:43.359879 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:43.359898 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:43.426992 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:43.427012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:43.441799 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:43.441816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:43.504072 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:43.504082 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:43.504093 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:43.585362 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:43.585390 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.114191 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:46.124223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:46.124285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:46.151013 1849924 cri.go:89] found id: ""
	I1124 09:54:46.151027 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.151034 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:46.151039 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:46.151096 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:46.177170 1849924 cri.go:89] found id: ""
	I1124 09:54:46.177184 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.177191 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:46.177196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:46.177258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:46.205800 1849924 cri.go:89] found id: ""
	I1124 09:54:46.205814 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.205822 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:46.205828 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:46.205893 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:46.239665 1849924 cri.go:89] found id: ""
	I1124 09:54:46.239689 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.239697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:46.239702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:46.239782 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:46.274455 1849924 cri.go:89] found id: ""
	I1124 09:54:46.274480 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.274488 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:46.274494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:46.274574 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:46.300659 1849924 cri.go:89] found id: ""
	I1124 09:54:46.300673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.300680 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:46.300686 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:46.300760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:46.326694 1849924 cri.go:89] found id: ""
	I1124 09:54:46.326708 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.326715 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:46.326723 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:46.326735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:46.389430 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:46.389441 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:46.389452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:46.467187 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:46.467207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.499873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:46.499889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:46.574600 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:46.574626 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.092671 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:49.102878 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:49.102942 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:49.130409 1849924 cri.go:89] found id: ""
	I1124 09:54:49.130431 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.130439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:49.130445 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:49.130508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:49.156861 1849924 cri.go:89] found id: ""
	I1124 09:54:49.156874 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.156891 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:49.156897 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:49.156964 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:49.183346 1849924 cri.go:89] found id: ""
	I1124 09:54:49.183369 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.183376 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:49.183382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:49.183442 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:49.217035 1849924 cri.go:89] found id: ""
	I1124 09:54:49.217049 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.217056 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:49.217062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:49.217146 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:49.245694 1849924 cri.go:89] found id: ""
	I1124 09:54:49.245713 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.245720 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:49.245726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:49.245891 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:49.284969 1849924 cri.go:89] found id: ""
	I1124 09:54:49.284983 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.284990 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:49.284995 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:49.285055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:49.314521 1849924 cri.go:89] found id: ""
	I1124 09:54:49.314535 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.314542 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:49.314549 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:49.314560 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:49.398958 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:49.398979 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:49.428494 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:49.428511 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:49.497701 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:49.497725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.513336 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:49.513352 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:49.581585 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.081862 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:52.092629 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:52.092692 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:52.124453 1849924 cri.go:89] found id: ""
	I1124 09:54:52.124475 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.124482 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:52.124488 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:52.124546 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:52.151758 1849924 cri.go:89] found id: ""
	I1124 09:54:52.151771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.151778 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:52.151784 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:52.151844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:52.176757 1849924 cri.go:89] found id: ""
	I1124 09:54:52.176771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.176778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:52.176783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:52.176846 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:52.201940 1849924 cri.go:89] found id: ""
	I1124 09:54:52.201954 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.201961 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:52.201967 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:52.202025 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:52.248612 1849924 cri.go:89] found id: ""
	I1124 09:54:52.248625 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.248632 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:52.248638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:52.248713 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:52.279382 1849924 cri.go:89] found id: ""
	I1124 09:54:52.279396 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.279404 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:52.279409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:52.279471 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:52.308695 1849924 cri.go:89] found id: ""
	I1124 09:54:52.308709 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.308717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:52.308724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:52.308735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:52.376027 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:52.376050 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:52.391327 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:52.391343 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:52.459367 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.459377 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:52.459389 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:52.535870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:52.535893 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:55.066284 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:55.077139 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:55.077203 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:55.105400 1849924 cri.go:89] found id: ""
	I1124 09:54:55.105498 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.105506 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:55.105512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:55.105620 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:55.136637 1849924 cri.go:89] found id: ""
	I1124 09:54:55.136651 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.136659 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:55.136664 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:55.136729 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:55.164659 1849924 cri.go:89] found id: ""
	I1124 09:54:55.164673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.164680 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:55.164685 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:55.164749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:55.190091 1849924 cri.go:89] found id: ""
	I1124 09:54:55.190117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.190124 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:55.190129 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:55.190191 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:55.224336 1849924 cri.go:89] found id: ""
	I1124 09:54:55.224351 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.224358 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:55.224363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:55.224424 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:55.259735 1849924 cri.go:89] found id: ""
	I1124 09:54:55.259748 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.259755 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:55.259761 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:55.259821 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:55.290052 1849924 cri.go:89] found id: ""
	I1124 09:54:55.290065 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.290072 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:55.290079 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:55.290090 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:55.355938 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:55.355957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:55.371501 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:55.371518 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:55.437126 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:55.437140 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:55.437152 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:55.515834 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:55.515854 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.048421 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:58.059495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:58.059560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:58.087204 1849924 cri.go:89] found id: ""
	I1124 09:54:58.087219 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.087226 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:58.087232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:58.087292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:58.118248 1849924 cri.go:89] found id: ""
	I1124 09:54:58.118262 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.118270 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:58.118276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:58.118336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:58.144878 1849924 cri.go:89] found id: ""
	I1124 09:54:58.144892 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.144899 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:58.144905 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:58.144963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:58.171781 1849924 cri.go:89] found id: ""
	I1124 09:54:58.171795 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.171814 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:58.171820 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:58.171898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:58.200885 1849924 cri.go:89] found id: ""
	I1124 09:54:58.200907 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.200915 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:58.200920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:58.200993 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:58.231674 1849924 cri.go:89] found id: ""
	I1124 09:54:58.231688 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.231695 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:58.231718 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:58.231792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:58.266664 1849924 cri.go:89] found id: ""
	I1124 09:54:58.266679 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.266686 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:58.266694 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:58.266705 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.300806 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:58.300822 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:58.367929 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:58.367949 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:58.383950 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:58.383967 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:58.449243 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:58.449254 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:58.449279 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:01.029569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:01.040150 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:01.040231 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:01.067942 1849924 cri.go:89] found id: ""
	I1124 09:55:01.067955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.067962 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:01.067968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:01.068031 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:01.095348 1849924 cri.go:89] found id: ""
	I1124 09:55:01.095362 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.095369 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:01.095375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:01.095436 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:01.125781 1849924 cri.go:89] found id: ""
	I1124 09:55:01.125795 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.125803 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:01.125808 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:01.125871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:01.153546 1849924 cri.go:89] found id: ""
	I1124 09:55:01.153561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.153568 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:01.153575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:01.153643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:01.183965 1849924 cri.go:89] found id: ""
	I1124 09:55:01.183980 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.183987 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:01.183993 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:01.184055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:01.218518 1849924 cri.go:89] found id: ""
	I1124 09:55:01.218533 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.218541 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:01.218548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:01.218628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:01.255226 1849924 cri.go:89] found id: ""
	I1124 09:55:01.255241 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.255248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:01.255255 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:01.255266 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:01.290705 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:01.290723 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:01.362275 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:01.362296 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:01.378338 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:01.378357 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:01.447338 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:01.447348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:01.447359 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.029431 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:04.039677 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:04.039753 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:04.064938 1849924 cri.go:89] found id: ""
	I1124 09:55:04.064952 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.064968 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:04.064975 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:04.065032 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:04.091065 1849924 cri.go:89] found id: ""
	I1124 09:55:04.091079 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.091087 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:04.091092 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:04.091155 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:04.119888 1849924 cri.go:89] found id: ""
	I1124 09:55:04.119902 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.119910 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:04.119915 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:04.119990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:04.145893 1849924 cri.go:89] found id: ""
	I1124 09:55:04.145907 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.145914 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:04.145920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:04.145981 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:04.172668 1849924 cri.go:89] found id: ""
	I1124 09:55:04.172682 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.172689 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:04.172695 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:04.172770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:04.199546 1849924 cri.go:89] found id: ""
	I1124 09:55:04.199559 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.199576 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:04.199582 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:04.199654 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:04.233837 1849924 cri.go:89] found id: ""
	I1124 09:55:04.233850 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.233857 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:04.233865 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:04.233875 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:04.312846 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:04.312868 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:04.328376 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:04.328393 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:04.392893 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:04.392903 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:04.392914 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.474469 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:04.474497 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.002775 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:07.014668 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:07.014734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:07.041533 1849924 cri.go:89] found id: ""
	I1124 09:55:07.041549 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.041556 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:07.041563 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:07.041628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:07.071414 1849924 cri.go:89] found id: ""
	I1124 09:55:07.071429 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.071436 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:07.071442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:07.071500 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:07.102622 1849924 cri.go:89] found id: ""
	I1124 09:55:07.102637 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.102644 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:07.102650 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:07.102708 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:07.127684 1849924 cri.go:89] found id: ""
	I1124 09:55:07.127713 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.127720 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:07.127726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:07.127792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:07.153696 1849924 cri.go:89] found id: ""
	I1124 09:55:07.153710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.153718 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:07.153724 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:07.153785 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:07.186158 1849924 cri.go:89] found id: ""
	I1124 09:55:07.186180 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.186187 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:07.186193 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:07.186252 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:07.217520 1849924 cri.go:89] found id: ""
	I1124 09:55:07.217554 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.217562 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:07.217570 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:07.217580 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.247265 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:07.247288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:07.320517 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:07.320537 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:07.336358 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:07.336373 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:07.403281 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:07.403292 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:07.403302 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:09.981463 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:09.992128 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:09.992195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:10.021174 1849924 cri.go:89] found id: ""
	I1124 09:55:10.021189 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.021197 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:10.021203 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:10.021267 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:10.049180 1849924 cri.go:89] found id: ""
	I1124 09:55:10.049194 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.049202 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:10.049207 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:10.049270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:10.078645 1849924 cri.go:89] found id: ""
	I1124 09:55:10.078660 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.078667 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:10.078673 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:10.078734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:10.106290 1849924 cri.go:89] found id: ""
	I1124 09:55:10.106304 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.106312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:10.106318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:10.106390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:10.133401 1849924 cri.go:89] found id: ""
	I1124 09:55:10.133455 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.133462 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:10.133468 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:10.133544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:10.162805 1849924 cri.go:89] found id: ""
	I1124 09:55:10.162820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.162827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:10.162833 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:10.162890 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:10.189156 1849924 cri.go:89] found id: ""
	I1124 09:55:10.189170 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.189177 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:10.189185 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:10.189206 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:10.280238 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:10.280247 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:10.280258 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:10.359007 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:10.359031 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:10.395999 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:10.396024 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:10.462661 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:10.462683 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:12.979323 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:12.989228 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:12.989300 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:13.016908 1849924 cri.go:89] found id: ""
	I1124 09:55:13.016922 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.016929 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:13.016935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:13.016998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:13.044445 1849924 cri.go:89] found id: ""
	I1124 09:55:13.044467 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.044474 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:13.044480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:13.044547 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:13.070357 1849924 cri.go:89] found id: ""
	I1124 09:55:13.070379 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.070387 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:13.070392 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:13.070461 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:13.098253 1849924 cri.go:89] found id: ""
	I1124 09:55:13.098267 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.098274 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:13.098280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:13.098339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:13.124183 1849924 cri.go:89] found id: ""
	I1124 09:55:13.124196 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.124203 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:13.124209 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:13.124269 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:13.150521 1849924 cri.go:89] found id: ""
	I1124 09:55:13.150536 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.150543 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:13.150549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:13.150619 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:13.181696 1849924 cri.go:89] found id: ""
	I1124 09:55:13.181710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.181717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:13.181724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:13.181735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:13.250758 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:13.250778 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:13.271249 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:13.271264 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:13.332213 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:13.332223 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:13.332235 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:13.409269 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:13.409293 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:15.940893 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:15.951127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:15.951201 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:15.976744 1849924 cri.go:89] found id: ""
	I1124 09:55:15.976767 1849924 logs.go:282] 0 containers: []
	W1124 09:55:15.976774 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:15.976780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:15.976848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:16.005218 1849924 cri.go:89] found id: ""
	I1124 09:55:16.005235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.005245 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:16.005251 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:16.005336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:16.036862 1849924 cri.go:89] found id: ""
	I1124 09:55:16.036888 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.036896 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:16.036902 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:16.036990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:16.063354 1849924 cri.go:89] found id: ""
	I1124 09:55:16.063369 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.063376 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:16.063382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:16.063455 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:16.092197 1849924 cri.go:89] found id: ""
	I1124 09:55:16.092211 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.092218 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:16.092224 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:16.092286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:16.117617 1849924 cri.go:89] found id: ""
	I1124 09:55:16.117631 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.117639 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:16.117644 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:16.117702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:16.143200 1849924 cri.go:89] found id: ""
	I1124 09:55:16.143214 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.143220 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:16.143228 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:16.143239 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:16.171873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:16.171889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:16.247500 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:16.247519 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:16.267064 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:16.267080 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:16.337347 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:16.337357 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:16.337368 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:18.916700 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:18.927603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:18.927697 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:18.958633 1849924 cri.go:89] found id: ""
	I1124 09:55:18.958649 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.958656 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:18.958662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:18.958725 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:18.988567 1849924 cri.go:89] found id: ""
	I1124 09:55:18.988582 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.988589 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:18.988594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:18.988665 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:19.016972 1849924 cri.go:89] found id: ""
	I1124 09:55:19.016986 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.016993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:19.016999 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:19.017058 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:19.042806 1849924 cri.go:89] found id: ""
	I1124 09:55:19.042827 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.042835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:19.042841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:19.042905 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:19.073274 1849924 cri.go:89] found id: ""
	I1124 09:55:19.073288 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.073296 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:19.073301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:19.073368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:19.099687 1849924 cri.go:89] found id: ""
	I1124 09:55:19.099701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.099708 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:19.099714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:19.099780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:19.126512 1849924 cri.go:89] found id: ""
	I1124 09:55:19.126526 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.126532 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:19.126540 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:19.126550 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:19.194410 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:19.194430 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:19.216505 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:19.216527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:19.291566 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:19.291578 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:19.291591 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:19.371192 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:19.371213 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:21.902356 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:21.912405 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:21.912468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:21.937243 1849924 cri.go:89] found id: ""
	I1124 09:55:21.937256 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.937270 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:21.937276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:21.937335 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:21.963054 1849924 cri.go:89] found id: ""
	I1124 09:55:21.963068 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.963075 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:21.963080 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:21.963136 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:21.988695 1849924 cri.go:89] found id: ""
	I1124 09:55:21.988708 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.988715 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:21.988722 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:21.988780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:22.015029 1849924 cri.go:89] found id: ""
	I1124 09:55:22.015043 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.015050 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:22.015056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:22.015117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:22.044828 1849924 cri.go:89] found id: ""
	I1124 09:55:22.044843 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.044851 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:22.044857 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:22.044919 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:22.071875 1849924 cri.go:89] found id: ""
	I1124 09:55:22.071889 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.071897 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:22.071903 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:22.071970 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:22.099237 1849924 cri.go:89] found id: ""
	I1124 09:55:22.099252 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.099259 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:22.099267 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:22.099278 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:22.170156 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:22.170176 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:22.185271 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:22.185288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:22.271963 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:22.271973 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:22.271984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:22.349426 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:22.349447 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:24.878185 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:24.888725 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:24.888800 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:24.915846 1849924 cri.go:89] found id: ""
	I1124 09:55:24.915860 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.915867 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:24.915872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:24.915931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:24.944104 1849924 cri.go:89] found id: ""
	I1124 09:55:24.944118 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.944125 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:24.944131 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:24.944196 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:24.970424 1849924 cri.go:89] found id: ""
	I1124 09:55:24.970438 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.970445 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:24.970450 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:24.970511 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:24.999941 1849924 cri.go:89] found id: ""
	I1124 09:55:24.999955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.999962 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:24.999968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:25.000027 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:25.030682 1849924 cri.go:89] found id: ""
	I1124 09:55:25.030700 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.030707 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:25.030714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:25.030788 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:25.061169 1849924 cri.go:89] found id: ""
	I1124 09:55:25.061183 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.061191 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:25.061196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:25.061262 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:25.092046 1849924 cri.go:89] found id: ""
	I1124 09:55:25.092061 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.092069 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:25.092078 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:25.092089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:25.164204 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:25.164229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:25.180461 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:25.180477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:25.270104 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:25.270114 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:25.270125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:25.349962 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:25.349985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:27.885869 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:27.895923 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:27.895990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:27.923576 1849924 cri.go:89] found id: ""
	I1124 09:55:27.923591 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.923598 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:27.923604 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:27.923660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:27.949384 1849924 cri.go:89] found id: ""
	I1124 09:55:27.949398 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.949405 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:27.949409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:27.949468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:27.974662 1849924 cri.go:89] found id: ""
	I1124 09:55:27.974675 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.974682 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:27.974687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:27.974752 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:28.000014 1849924 cri.go:89] found id: ""
	I1124 09:55:28.000028 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.000035 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:28.000041 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:28.000113 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:28.031383 1849924 cri.go:89] found id: ""
	I1124 09:55:28.031397 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.031404 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:28.031410 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:28.031468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:28.062851 1849924 cri.go:89] found id: ""
	I1124 09:55:28.062872 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.062880 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:28.062886 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:28.062965 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:28.091592 1849924 cri.go:89] found id: ""
	I1124 09:55:28.091608 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.091623 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:28.091633 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:28.091646 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:28.125018 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:28.125035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:28.190729 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:28.190751 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:28.205665 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:28.205681 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:28.285905 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:28.285917 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:28.285927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:30.864245 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:30.876164 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:30.876248 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:30.901572 1849924 cri.go:89] found id: ""
	I1124 09:55:30.901586 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.901593 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:30.901599 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:30.901659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:30.931361 1849924 cri.go:89] found id: ""
	I1124 09:55:30.931374 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.931382 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:30.931388 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:30.931449 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:30.956087 1849924 cri.go:89] found id: ""
	I1124 09:55:30.956101 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.956108 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:30.956114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:30.956174 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:30.981912 1849924 cri.go:89] found id: ""
	I1124 09:55:30.981925 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.981933 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:30.981938 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:30.982013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:31.010764 1849924 cri.go:89] found id: ""
	I1124 09:55:31.010778 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.010804 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:31.010811 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:31.010884 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:31.037094 1849924 cri.go:89] found id: ""
	I1124 09:55:31.037140 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.037146 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:31.037153 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:31.037221 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:31.064060 1849924 cri.go:89] found id: ""
	I1124 09:55:31.064075 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.064092 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:31.064100 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:31.064111 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:31.129432 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:31.129444 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:31.129455 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:31.207603 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:31.207622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:31.246019 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:31.246035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:31.313859 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:31.313882 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:33.829785 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:33.839749 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:33.839813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:33.864810 1849924 cri.go:89] found id: ""
	I1124 09:55:33.864824 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.864831 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:33.864837 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:33.864898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:33.890309 1849924 cri.go:89] found id: ""
	I1124 09:55:33.890324 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.890331 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:33.890336 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:33.890401 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:33.922386 1849924 cri.go:89] found id: ""
	I1124 09:55:33.922399 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.922406 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:33.922412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:33.922473 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:33.947199 1849924 cri.go:89] found id: ""
	I1124 09:55:33.947213 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.947220 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:33.947226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:33.947289 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:33.972195 1849924 cri.go:89] found id: ""
	I1124 09:55:33.972209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.972216 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:33.972222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:33.972294 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:33.997877 1849924 cri.go:89] found id: ""
	I1124 09:55:33.997891 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.997898 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:33.997904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:33.997961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:34.024719 1849924 cri.go:89] found id: ""
	I1124 09:55:34.024733 1849924 logs.go:282] 0 containers: []
	W1124 09:55:34.024741 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:34.024748 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:34.024769 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:34.089874 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:34.089896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:34.104839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:34.104857 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:34.171681 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:34.171691 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:34.171702 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:34.249876 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:34.249896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:36.781512 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:36.791518 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:36.791579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:36.820485 1849924 cri.go:89] found id: ""
	I1124 09:55:36.820500 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.820508 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:36.820514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:36.820589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:36.845963 1849924 cri.go:89] found id: ""
	I1124 09:55:36.845978 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.845985 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:36.845991 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:36.846062 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:36.880558 1849924 cri.go:89] found id: ""
	I1124 09:55:36.880573 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.880580 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:36.880586 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:36.880656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:36.908730 1849924 cri.go:89] found id: ""
	I1124 09:55:36.908745 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.908752 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:36.908769 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:36.908830 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:36.936618 1849924 cri.go:89] found id: ""
	I1124 09:55:36.936634 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.936646 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:36.936662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:36.936724 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:36.961091 1849924 cri.go:89] found id: ""
	I1124 09:55:36.961134 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.961142 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:36.961148 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:36.961215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:36.986263 1849924 cri.go:89] found id: ""
	I1124 09:55:36.986278 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.986285 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:36.986293 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:36.986304 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:37.061090 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:37.061120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:37.076634 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:37.076652 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:37.144407 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:37.144417 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:37.144427 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:37.223887 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:37.223907 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:39.759307 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:39.769265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:39.769325 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:39.795092 1849924 cri.go:89] found id: ""
	I1124 09:55:39.795107 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.795114 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:39.795120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:39.795180 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:39.821381 1849924 cri.go:89] found id: ""
	I1124 09:55:39.821396 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.821403 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:39.821408 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:39.821480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:39.850195 1849924 cri.go:89] found id: ""
	I1124 09:55:39.850209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.850224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:39.850232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:39.850291 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:39.875376 1849924 cri.go:89] found id: ""
	I1124 09:55:39.875391 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.875398 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:39.875404 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:39.875466 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:39.904124 1849924 cri.go:89] found id: ""
	I1124 09:55:39.904138 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.904146 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:39.904151 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:39.904222 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:39.930807 1849924 cri.go:89] found id: ""
	I1124 09:55:39.930820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.930827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:39.930832 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:39.930889 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:39.960435 1849924 cri.go:89] found id: ""
	I1124 09:55:39.960449 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.960456 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:39.960464 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:39.960475 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:40.030261 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:40.030271 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:40.030283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:40.109590 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:40.109615 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:40.143688 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:40.143704 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:40.212394 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:40.212412 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:42.734304 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:42.744432 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:42.744494 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:42.769686 1849924 cri.go:89] found id: ""
	I1124 09:55:42.769701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.769708 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:42.769714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:42.769774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:42.794368 1849924 cri.go:89] found id: ""
	I1124 09:55:42.794381 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.794388 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:42.794394 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:42.794460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:42.819036 1849924 cri.go:89] found id: ""
	I1124 09:55:42.819051 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.819058 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:42.819067 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:42.819126 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:42.845429 1849924 cri.go:89] found id: ""
	I1124 09:55:42.845444 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.845452 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:42.845457 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:42.845516 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:42.873391 1849924 cri.go:89] found id: ""
	I1124 09:55:42.873405 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.873412 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:42.873418 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:42.873483 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:42.899532 1849924 cri.go:89] found id: ""
	I1124 09:55:42.899560 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.899567 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:42.899575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:42.899642 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:42.925159 1849924 cri.go:89] found id: ""
	I1124 09:55:42.925173 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.925180 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:42.925188 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:42.925215 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:43.003079 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:43.003104 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:43.041964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:43.041990 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:43.120202 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:43.120224 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:43.143097 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:43.143191 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:43.219616 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:45.719895 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:45.730306 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:45.730370 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:45.755318 1849924 cri.go:89] found id: ""
	I1124 09:55:45.755333 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.755341 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:45.755353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:45.755413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:45.781283 1849924 cri.go:89] found id: ""
	I1124 09:55:45.781299 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.781305 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:45.781311 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:45.781369 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:45.807468 1849924 cri.go:89] found id: ""
	I1124 09:55:45.807482 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.807489 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:45.807495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:45.807554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:45.836726 1849924 cri.go:89] found id: ""
	I1124 09:55:45.836741 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.836749 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:45.836754 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:45.836813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:45.862613 1849924 cri.go:89] found id: ""
	I1124 09:55:45.862628 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.862635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:45.862641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:45.862702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:45.894972 1849924 cri.go:89] found id: ""
	I1124 09:55:45.894987 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.894994 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:45.895000 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:45.895067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:45.922194 1849924 cri.go:89] found id: ""
	I1124 09:55:45.922209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.922217 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:45.922224 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:45.922237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:45.954912 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:45.954930 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:46.021984 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:46.022004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:46.037849 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:46.037865 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:46.101460 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:46.101473 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:46.101483 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:48.688081 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:48.698194 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:48.698260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:48.724390 1849924 cri.go:89] found id: ""
	I1124 09:55:48.724404 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.724411 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:48.724416 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:48.724480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:48.749323 1849924 cri.go:89] found id: ""
	I1124 09:55:48.749337 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.749344 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:48.749350 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:48.749406 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:48.774542 1849924 cri.go:89] found id: ""
	I1124 09:55:48.774555 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.774562 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:48.774569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:48.774635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:48.799553 1849924 cri.go:89] found id: ""
	I1124 09:55:48.799568 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.799575 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:48.799580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:48.799637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:48.824768 1849924 cri.go:89] found id: ""
	I1124 09:55:48.824782 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.824789 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:48.824794 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:48.824849 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:48.853654 1849924 cri.go:89] found id: ""
	I1124 09:55:48.853668 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.853674 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:48.853680 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:48.853738 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:48.880137 1849924 cri.go:89] found id: ""
	I1124 09:55:48.880151 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.880158 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:48.880166 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:48.880178 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:48.943985 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:48.943998 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:48.944008 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:49.021387 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:49.021407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:49.054551 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:49.054566 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:49.124670 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:49.124690 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.640001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:51.650264 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:51.650326 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:51.675421 1849924 cri.go:89] found id: ""
	I1124 09:55:51.675434 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.675442 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:51.675447 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:51.675510 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:51.703552 1849924 cri.go:89] found id: ""
	I1124 09:55:51.703566 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.703573 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:51.703578 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:51.703637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:51.731457 1849924 cri.go:89] found id: ""
	I1124 09:55:51.731470 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.731477 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:51.731483 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:51.731540 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:51.757515 1849924 cri.go:89] found id: ""
	I1124 09:55:51.757529 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.757536 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:51.757541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:51.757604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:51.787493 1849924 cri.go:89] found id: ""
	I1124 09:55:51.787507 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.787514 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:51.787520 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:51.787579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:51.813153 1849924 cri.go:89] found id: ""
	I1124 09:55:51.813166 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.813173 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:51.813179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:51.813250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:51.845222 1849924 cri.go:89] found id: ""
	I1124 09:55:51.845235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.845244 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:51.845252 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:51.845272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.860214 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:51.860236 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:51.924176 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:51.924186 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:51.924196 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:52.001608 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:52.001629 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:52.037448 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:52.037466 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.609480 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:54.620161 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:54.620223 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:54.649789 1849924 cri.go:89] found id: ""
	I1124 09:55:54.649803 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.649810 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:54.649816 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:54.649879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:54.677548 1849924 cri.go:89] found id: ""
	I1124 09:55:54.677561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.677568 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:54.677573 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:54.677635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:54.707602 1849924 cri.go:89] found id: ""
	I1124 09:55:54.707616 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.707623 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:54.707628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:54.707687 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:54.737369 1849924 cri.go:89] found id: ""
	I1124 09:55:54.737382 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.737390 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:54.737396 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:54.737460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:54.764514 1849924 cri.go:89] found id: ""
	I1124 09:55:54.764528 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.764536 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:54.764541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:54.764599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:54.789898 1849924 cri.go:89] found id: ""
	I1124 09:55:54.789912 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.789920 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:54.789925 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:54.789986 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:54.815652 1849924 cri.go:89] found id: ""
	I1124 09:55:54.815665 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.815672 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:54.815681 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:54.815691 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.882879 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:54.882901 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:54.898593 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:54.898622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:54.967134 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:54.967146 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:54.967157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:55.046870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:55.046891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.578091 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:57.588580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:57.588643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:57.617411 1849924 cri.go:89] found id: ""
	I1124 09:55:57.617425 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.617432 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:57.617437 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:57.617503 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:57.642763 1849924 cri.go:89] found id: ""
	I1124 09:55:57.642777 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.642784 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:57.642789 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:57.642848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:57.668484 1849924 cri.go:89] found id: ""
	I1124 09:55:57.668499 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.668506 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:57.668512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:57.668571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:57.694643 1849924 cri.go:89] found id: ""
	I1124 09:55:57.694657 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.694664 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:57.694670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:57.694730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:57.720049 1849924 cri.go:89] found id: ""
	I1124 09:55:57.720063 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.720070 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:57.720075 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:57.720140 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:57.748016 1849924 cri.go:89] found id: ""
	I1124 09:55:57.748029 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.748036 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:57.748044 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:57.748104 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:57.774253 1849924 cri.go:89] found id: ""
	I1124 09:55:57.774266 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.774273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:57.774281 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:57.774295 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:57.789236 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:57.789253 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:57.851207 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:57.851217 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:57.851229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:57.927927 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:57.927946 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.959058 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:57.959075 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.529440 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:00.539970 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:00.540034 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:00.566556 1849924 cri.go:89] found id: ""
	I1124 09:56:00.566570 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.566583 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:00.566589 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:00.566659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:00.596278 1849924 cri.go:89] found id: ""
	I1124 09:56:00.596291 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.596298 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:00.596304 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:00.596362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:00.623580 1849924 cri.go:89] found id: ""
	I1124 09:56:00.623593 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.623600 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:00.623605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:00.623664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:00.648991 1849924 cri.go:89] found id: ""
	I1124 09:56:00.649006 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.649012 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:00.649018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:00.649078 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:00.676614 1849924 cri.go:89] found id: ""
	I1124 09:56:00.676628 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.676635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:00.676641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:00.676706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:00.701480 1849924 cri.go:89] found id: ""
	I1124 09:56:00.701502 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.701509 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:00.701516 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:00.701575 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:00.727550 1849924 cri.go:89] found id: ""
	I1124 09:56:00.727563 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.727570 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:00.727578 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:00.727589 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:00.755964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:00.755980 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.822018 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:00.822039 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:00.837252 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:00.837268 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:00.901931 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:00.901942 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:00.901957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.481859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:03.493893 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:03.493961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:03.522628 1849924 cri.go:89] found id: ""
	I1124 09:56:03.522643 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.522650 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:03.522656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:03.522716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:03.551454 1849924 cri.go:89] found id: ""
	I1124 09:56:03.551468 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.551475 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:03.551480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:03.551539 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:03.580931 1849924 cri.go:89] found id: ""
	I1124 09:56:03.580945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.580951 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:03.580957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:03.581015 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:03.607826 1849924 cri.go:89] found id: ""
	I1124 09:56:03.607840 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.607846 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:03.607852 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:03.607923 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:03.637843 1849924 cri.go:89] found id: ""
	I1124 09:56:03.637857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.637865 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:03.637870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:03.637931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:03.665156 1849924 cri.go:89] found id: ""
	I1124 09:56:03.665170 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.665176 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:03.665182 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:03.665250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:03.690810 1849924 cri.go:89] found id: ""
	I1124 09:56:03.690824 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.690831 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:03.690839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:03.690849 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:03.755803 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:03.755813 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:03.755823 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.832793 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:03.832816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:03.860351 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:03.860367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:03.930446 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:03.930465 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.445925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:06.457385 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:06.457451 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:06.490931 1849924 cri.go:89] found id: ""
	I1124 09:56:06.490944 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.490951 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:06.490956 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:06.491013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:06.529326 1849924 cri.go:89] found id: ""
	I1124 09:56:06.529340 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.529347 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:06.529353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:06.529409 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:06.554888 1849924 cri.go:89] found id: ""
	I1124 09:56:06.554914 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.554921 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:06.554926 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:06.554984 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:06.579750 1849924 cri.go:89] found id: ""
	I1124 09:56:06.579764 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.579771 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:06.579781 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:06.579839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:06.605075 1849924 cri.go:89] found id: ""
	I1124 09:56:06.605098 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.605134 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:06.605140 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:06.605207 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:06.630281 1849924 cri.go:89] found id: ""
	I1124 09:56:06.630295 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.630302 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:06.630307 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:06.630366 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:06.655406 1849924 cri.go:89] found id: ""
	I1124 09:56:06.655427 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.655435 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:06.655442 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:06.655453 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:06.722316 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:06.722335 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.737174 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:06.737190 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:06.801018 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:06.801032 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:06.801042 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:06.882225 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:06.882254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.412996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:09.423266 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:09.423332 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:09.452270 1849924 cri.go:89] found id: ""
	I1124 09:56:09.452283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.452290 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:09.452295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:09.452353 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:09.484931 1849924 cri.go:89] found id: ""
	I1124 09:56:09.484945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.484952 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:09.484957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:09.485030 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:09.526676 1849924 cri.go:89] found id: ""
	I1124 09:56:09.526689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.526696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:09.526701 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:09.526758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:09.551815 1849924 cri.go:89] found id: ""
	I1124 09:56:09.551828 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.551835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:09.551841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:09.551904 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:09.580143 1849924 cri.go:89] found id: ""
	I1124 09:56:09.580159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.580167 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:09.580173 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:09.580233 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:09.608255 1849924 cri.go:89] found id: ""
	I1124 09:56:09.608269 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.608276 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:09.608281 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:09.608338 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:09.638262 1849924 cri.go:89] found id: ""
	I1124 09:56:09.638276 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.638283 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:09.638291 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:09.638301 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:09.713707 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:09.713728 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.741202 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:09.741218 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:09.806578 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:09.806598 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:09.821839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:09.821855 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:09.888815 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.390494 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:12.400491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:12.400550 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:12.426496 1849924 cri.go:89] found id: ""
	I1124 09:56:12.426511 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.426517 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:12.426524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:12.426587 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:12.457770 1849924 cri.go:89] found id: ""
	I1124 09:56:12.457794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.457801 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:12.457807 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:12.457873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:12.489154 1849924 cri.go:89] found id: ""
	I1124 09:56:12.489167 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.489174 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:12.489179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:12.489250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:12.524997 1849924 cri.go:89] found id: ""
	I1124 09:56:12.525010 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.525018 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:12.525024 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:12.525090 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:12.550538 1849924 cri.go:89] found id: ""
	I1124 09:56:12.550561 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.550569 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:12.550574 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:12.550650 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:12.575990 1849924 cri.go:89] found id: ""
	I1124 09:56:12.576011 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.576018 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:12.576025 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:12.576095 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:12.602083 1849924 cri.go:89] found id: ""
	I1124 09:56:12.602097 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.602104 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:12.602112 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:12.602125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:12.667794 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:12.667814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:12.682815 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:12.682832 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:12.749256 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.749266 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:12.749276 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:12.823882 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:12.823902 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.353890 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:15.364319 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:15.364380 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:15.389759 1849924 cri.go:89] found id: ""
	I1124 09:56:15.389772 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.389786 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:15.389792 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:15.389850 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:15.414921 1849924 cri.go:89] found id: ""
	I1124 09:56:15.414936 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.414943 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:15.414948 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:15.415008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:15.444228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.444242 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.444249 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:15.444254 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:15.444314 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:15.476734 1849924 cri.go:89] found id: ""
	I1124 09:56:15.476747 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.476763 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:15.476768 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:15.476836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:15.507241 1849924 cri.go:89] found id: ""
	I1124 09:56:15.507254 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.507261 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:15.507275 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:15.507339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:15.544058 1849924 cri.go:89] found id: ""
	I1124 09:56:15.544081 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.544089 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:15.544094 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:15.544162 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:15.571228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.571241 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.571248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:15.571261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:15.571272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:15.646647 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:15.646667 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.674311 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:15.674326 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:15.739431 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:15.739451 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:15.754640 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:15.754662 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:15.821471 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.321745 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:18.331603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:18.331664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:18.357195 1849924 cri.go:89] found id: ""
	I1124 09:56:18.357215 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.357223 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:18.357229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:18.357292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:18.387513 1849924 cri.go:89] found id: ""
	I1124 09:56:18.387527 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.387534 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:18.387540 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:18.387600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:18.414561 1849924 cri.go:89] found id: ""
	I1124 09:56:18.414583 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.414590 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:18.414596 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:18.414670 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:18.441543 1849924 cri.go:89] found id: ""
	I1124 09:56:18.441557 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.441564 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:18.441569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:18.441627 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:18.481911 1849924 cri.go:89] found id: ""
	I1124 09:56:18.481924 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.481931 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:18.481937 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:18.481995 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:18.512577 1849924 cri.go:89] found id: ""
	I1124 09:56:18.512589 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.512596 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:18.512601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:18.512660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:18.542006 1849924 cri.go:89] found id: ""
	I1124 09:56:18.542021 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.542028 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:18.542035 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:18.542045 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:18.572217 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:18.572233 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:18.637845 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:18.637863 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:18.653892 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:18.653908 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:18.720870 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.720881 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:18.720891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.300479 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:21.310612 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:21.310716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:21.339787 1849924 cri.go:89] found id: ""
	I1124 09:56:21.339801 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.339808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:21.339819 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:21.339879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:21.364577 1849924 cri.go:89] found id: ""
	I1124 09:56:21.364601 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.364609 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:21.364615 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:21.364688 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:21.391798 1849924 cri.go:89] found id: ""
	I1124 09:56:21.391852 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.391859 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:21.391865 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:21.391939 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:21.417518 1849924 cri.go:89] found id: ""
	I1124 09:56:21.417532 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.417539 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:21.417545 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:21.417600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:21.443079 1849924 cri.go:89] found id: ""
	I1124 09:56:21.443092 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.443099 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:21.443104 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:21.443164 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:21.483649 1849924 cri.go:89] found id: ""
	I1124 09:56:21.483663 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.483685 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:21.483691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:21.483758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:21.513352 1849924 cri.go:89] found id: ""
	I1124 09:56:21.513367 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.513374 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:21.513383 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:21.513445 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:21.583074 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:21.583095 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:21.598415 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:21.598432 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:21.661326 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:21.661336 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:21.661348 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.742506 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:21.742527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:24.271763 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:24.281983 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:24.282044 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:24.313907 1849924 cri.go:89] found id: ""
	I1124 09:56:24.313920 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.313928 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:24.313934 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:24.314006 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:24.338982 1849924 cri.go:89] found id: ""
	I1124 09:56:24.338996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.339003 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:24.339009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:24.339067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:24.365195 1849924 cri.go:89] found id: ""
	I1124 09:56:24.365209 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.365216 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:24.365222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:24.365292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:24.390215 1849924 cri.go:89] found id: ""
	I1124 09:56:24.390228 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.390235 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:24.390241 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:24.390299 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:24.415458 1849924 cri.go:89] found id: ""
	I1124 09:56:24.415472 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.415479 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:24.415484 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:24.415544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:24.442483 1849924 cri.go:89] found id: ""
	I1124 09:56:24.442497 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.442504 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:24.442510 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:24.442571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:24.478898 1849924 cri.go:89] found id: ""
	I1124 09:56:24.478912 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.478919 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:24.478926 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:24.478936 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:24.559295 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:24.559320 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:24.575521 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:24.575538 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:24.643962 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:24.643974 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:24.643985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:24.721863 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:24.721883 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
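Each cycle above runs the same per-component check: `sudo crictl ps -a --quiet --name=<component>` for every control-plane piece, warning when the list comes back empty. A minimal sketch of that enumeration; `list_ids` is a stub standing in for the real `sudo crictl ps -a --quiet --name="$1"` so the loop can run without a node:

```shell
#!/bin/sh
# Enumerate control-plane components and report any with no CRI container
# in any state, mirroring the cri.go/logs.go checks in the log above.
list_ids() {
  # Replace with: sudo crictl ps -a --quiet --name="$1"
  printf ''   # stub: pretend no containers exist, as in this run
}
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet; do
  ids=$(list_ids "$name")
  if [ -z "$ids" ]; then
    echo "No container was found matching \"$name\""
  else
    echo "$name: $ids"
  fi
done
```

In this run all seven lookups come back empty, so log gathering falls back to kubelet, dmesg, and CRI-O journal output.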
	I1124 09:56:27.252684 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:27.262544 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:27.262604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:27.288190 1849924 cri.go:89] found id: ""
	I1124 09:56:27.288203 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.288211 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:27.288216 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:27.288276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:27.315955 1849924 cri.go:89] found id: ""
	I1124 09:56:27.315975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.315983 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:27.315988 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:27.316050 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:27.341613 1849924 cri.go:89] found id: ""
	I1124 09:56:27.341626 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.341633 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:27.341639 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:27.341699 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:27.366677 1849924 cri.go:89] found id: ""
	I1124 09:56:27.366690 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.366697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:27.366703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:27.366768 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:27.392001 1849924 cri.go:89] found id: ""
	I1124 09:56:27.392015 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.392021 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:27.392027 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:27.392085 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:27.419410 1849924 cri.go:89] found id: ""
	I1124 09:56:27.419430 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.419436 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:27.419442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:27.419501 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:27.444780 1849924 cri.go:89] found id: ""
	I1124 09:56:27.444794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.444801 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:27.444809 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:27.444824 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.478836 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:27.478853 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:27.552795 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:27.552814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:27.567935 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:27.567988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:27.630838 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:27.630849 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:27.630859 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:30.212620 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:30.223248 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:30.223313 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:30.249863 1849924 cri.go:89] found id: ""
	I1124 09:56:30.249876 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.249883 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:30.249888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:30.249947 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:30.275941 1849924 cri.go:89] found id: ""
	I1124 09:56:30.275955 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.275974 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:30.275980 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:30.276053 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:30.300914 1849924 cri.go:89] found id: ""
	I1124 09:56:30.300928 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.300944 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:30.300950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:30.301016 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:30.325980 1849924 cri.go:89] found id: ""
	I1124 09:56:30.325994 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.326011 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:30.326018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:30.326089 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:30.352023 1849924 cri.go:89] found id: ""
	I1124 09:56:30.352038 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.352045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:30.352050 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:30.352121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:30.379711 1849924 cri.go:89] found id: ""
	I1124 09:56:30.379724 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.379731 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:30.379736 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:30.379801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:30.409210 1849924 cri.go:89] found id: ""
	I1124 09:56:30.409224 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.409232 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:30.409240 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:30.409251 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:30.437995 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:30.438012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:30.507429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:30.507448 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:30.525911 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:30.525927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:30.589196 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:30.589210 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:30.589220 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:33.172621 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:33.182671 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:33.182730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:33.211695 1849924 cri.go:89] found id: ""
	I1124 09:56:33.211709 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.211716 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:33.211721 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:33.211779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:33.237798 1849924 cri.go:89] found id: ""
	I1124 09:56:33.237811 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.237818 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:33.237824 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:33.237885 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:33.262147 1849924 cri.go:89] found id: ""
	I1124 09:56:33.262160 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.262167 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:33.262172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:33.262230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:33.286667 1849924 cri.go:89] found id: ""
	I1124 09:56:33.286681 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.286690 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:33.286696 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:33.286754 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:33.311109 1849924 cri.go:89] found id: ""
	I1124 09:56:33.311122 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.311129 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:33.311135 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:33.311198 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:33.336757 1849924 cri.go:89] found id: ""
	I1124 09:56:33.336781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.336790 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:33.336796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:33.336864 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:33.365159 1849924 cri.go:89] found id: ""
	I1124 09:56:33.365172 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.365179 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:33.365186 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:33.365197 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:33.393002 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:33.393017 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:33.457704 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:33.457724 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:33.473674 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:33.473700 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:33.547251 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
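The timestamps (09:56:21, :24, :27, :30, :33, :36) show a roughly 3-second poll: each round runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, re-lists containers, and re-gathers logs until the apiserver shows up or the caller times out. A stand-alone sketch of that wait loop, with `check_apiserver` as a stub (not minikube code) and the sleep commented out so the sketch runs instantly:

```shell
#!/bin/sh
# Poll for the apiserver process a bounded number of times.
check_apiserver() {
  # Replace with: sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  false   # stub: the apiserver never comes up, as in this run
}
attempts=0
while [ $attempts -lt 5 ]; do
  if check_apiserver; then
    echo "apiserver is up"
    break
  fi
  attempts=$((attempts + 1))
  # sleep 3   # the real loop waits between polls
done
echo "gave up after $attempts polls"
```

Here the loop exhausts its attempts, matching the log, where every poll through 09:56:36 still finds no `kube-apiserver` process or container.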
	I1124 09:56:33.547261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:33.547274 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.125180 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:36.135549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:36.135611 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:36.161892 1849924 cri.go:89] found id: ""
	I1124 09:56:36.161906 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.161913 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:36.161919 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:36.161980 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:36.192254 1849924 cri.go:89] found id: ""
	I1124 09:56:36.192268 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.192275 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:36.192280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:36.192341 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:36.219675 1849924 cri.go:89] found id: ""
	I1124 09:56:36.219689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.219696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:36.219702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:36.219760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:36.249674 1849924 cri.go:89] found id: ""
	I1124 09:56:36.249688 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.249695 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:36.249700 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:36.249756 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:36.276115 1849924 cri.go:89] found id: ""
	I1124 09:56:36.276129 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.276136 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:36.276141 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:36.276199 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:36.303472 1849924 cri.go:89] found id: ""
	I1124 09:56:36.303486 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.303494 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:36.303499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:36.303558 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:36.332774 1849924 cri.go:89] found id: ""
	I1124 09:56:36.332789 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.332796 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:36.332804 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:36.332814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.410262 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:36.410282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:36.442608 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:36.442625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:36.517228 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:36.517247 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:36.532442 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:36.532459 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:36.598941 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:39.099623 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:39.110286 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:39.110347 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:39.135094 1849924 cri.go:89] found id: ""
	I1124 09:56:39.135108 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.135115 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:39.135120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:39.135184 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:39.161664 1849924 cri.go:89] found id: ""
	I1124 09:56:39.161678 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.161685 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:39.161691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:39.161749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:39.186843 1849924 cri.go:89] found id: ""
	I1124 09:56:39.186857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.186865 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:39.186870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:39.186930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:39.212864 1849924 cri.go:89] found id: ""
	I1124 09:56:39.212878 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.212889 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:39.212895 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:39.212953 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:39.243329 1849924 cri.go:89] found id: ""
	I1124 09:56:39.243343 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.243350 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:39.243356 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:39.243421 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:39.268862 1849924 cri.go:89] found id: ""
	I1124 09:56:39.268875 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.268883 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:39.268888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:39.268950 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:39.295966 1849924 cri.go:89] found id: ""
	I1124 09:56:39.295979 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.295986 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:39.295993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:39.296004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:39.327310 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:39.327325 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:39.392831 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:39.392850 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:39.407904 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:39.407920 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:39.476692 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:39.476716 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:39.476729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.055953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:42.067687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:42.067767 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:42.096948 1849924 cri.go:89] found id: ""
	I1124 09:56:42.096963 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.096971 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:42.096977 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:42.097039 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:42.128766 1849924 cri.go:89] found id: ""
	I1124 09:56:42.128781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.128789 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:42.128795 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:42.128861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:42.160266 1849924 cri.go:89] found id: ""
	I1124 09:56:42.160283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.160291 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:42.160297 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:42.160368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:42.191973 1849924 cri.go:89] found id: ""
	I1124 09:56:42.191996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.192004 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:42.192011 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:42.192081 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:42.226204 1849924 cri.go:89] found id: ""
	I1124 09:56:42.226218 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.226226 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:42.226232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:42.226316 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:42.253907 1849924 cri.go:89] found id: ""
	I1124 09:56:42.253922 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.253929 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:42.253935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:42.253998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:42.282770 1849924 cri.go:89] found id: ""
	I1124 09:56:42.282786 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.282793 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:42.282800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:42.282811 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:42.298712 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:42.298729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:42.363239 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:42.363249 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:42.363260 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.437643 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:42.437663 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:42.475221 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:42.475237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:45.048529 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:45.067334 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:45.067432 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:45.099636 1849924 cri.go:89] found id: ""
	I1124 09:56:45.099652 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.099659 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:45.099666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:45.099762 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:45.132659 1849924 cri.go:89] found id: ""
	I1124 09:56:45.132693 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.132701 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:45.132708 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:45.132792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:45.169282 1849924 cri.go:89] found id: ""
	I1124 09:56:45.169306 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.169314 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:45.169320 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:45.169398 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:45.226517 1849924 cri.go:89] found id: ""
	I1124 09:56:45.226533 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.226542 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:45.226548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:45.226626 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:45.265664 1849924 cri.go:89] found id: ""
	I1124 09:56:45.265680 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.265687 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:45.265693 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:45.265759 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:45.298503 1849924 cri.go:89] found id: ""
	I1124 09:56:45.298517 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.298525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:45.298531 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:45.298599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:45.329403 1849924 cri.go:89] found id: ""
	I1124 09:56:45.329436 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.329445 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:45.329453 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:45.329464 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:45.345344 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:45.345361 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:45.412742 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:45.412752 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:45.412763 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:45.493978 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:45.493998 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:45.531425 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:45.531441 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.098018 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:48.108764 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:48.108836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:48.134307 1849924 cri.go:89] found id: ""
	I1124 09:56:48.134321 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.134328 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:48.134333 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:48.134390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:48.159252 1849924 cri.go:89] found id: ""
	I1124 09:56:48.159266 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.159273 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:48.159279 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:48.159337 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:48.184464 1849924 cri.go:89] found id: ""
	I1124 09:56:48.184478 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.184496 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:48.184507 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:48.184589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:48.209500 1849924 cri.go:89] found id: ""
	I1124 09:56:48.209513 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.209520 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:48.209526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:48.209590 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:48.236025 1849924 cri.go:89] found id: ""
	I1124 09:56:48.236039 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.236045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:48.236051 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:48.236121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:48.262196 1849924 cri.go:89] found id: ""
	I1124 09:56:48.262210 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.262216 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:48.262222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:48.262285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:48.286684 1849924 cri.go:89] found id: ""
	I1124 09:56:48.286698 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.286705 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:48.286712 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:48.286725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.354155 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:48.354174 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:48.369606 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:48.369625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:48.436183 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:48.436193 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:48.436207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:48.516667 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:48.516688 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.047020 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:51.057412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:51.057477 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:51.087137 1849924 cri.go:89] found id: ""
	I1124 09:56:51.087159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.087167 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:51.087172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:51.087241 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:51.115003 1849924 cri.go:89] found id: ""
	I1124 09:56:51.115018 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.115025 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:51.115031 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:51.115093 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:51.144604 1849924 cri.go:89] found id: ""
	I1124 09:56:51.144622 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.144631 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:51.144638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:51.144706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:51.172310 1849924 cri.go:89] found id: ""
	I1124 09:56:51.172323 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.172338 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:51.172345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:51.172413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:51.200354 1849924 cri.go:89] found id: ""
	I1124 09:56:51.200376 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.200384 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:51.200390 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:51.200463 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:51.225889 1849924 cri.go:89] found id: ""
	I1124 09:56:51.225903 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.225911 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:51.225917 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:51.225974 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:51.250937 1849924 cri.go:89] found id: ""
	I1124 09:56:51.250950 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.250956 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:51.250972 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:51.250984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.281935 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:51.281951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:51.346955 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:51.346975 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:51.362412 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:51.362428 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:51.424513 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:51.424523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:51.424534 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.006160 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:54.017499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:54.017565 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:54.048035 1849924 cri.go:89] found id: ""
	I1124 09:56:54.048049 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.048056 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:54.048062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:54.048117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:54.075193 1849924 cri.go:89] found id: ""
	I1124 09:56:54.075207 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.075214 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:54.075220 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:54.075278 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:54.101853 1849924 cri.go:89] found id: ""
	I1124 09:56:54.101868 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.101875 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:54.101880 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:54.101938 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:54.128585 1849924 cri.go:89] found id: ""
	I1124 09:56:54.128600 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.128608 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:54.128614 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:54.128673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:54.154726 1849924 cri.go:89] found id: ""
	I1124 09:56:54.154742 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.154750 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:54.154756 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:54.154819 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:54.180936 1849924 cri.go:89] found id: ""
	I1124 09:56:54.180975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.180984 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:54.180990 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:54.181070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:54.209038 1849924 cri.go:89] found id: ""
	I1124 09:56:54.209060 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.209067 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:54.209075 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:54.209085 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:54.279263 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:54.279289 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:54.295105 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:54.295131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:54.367337 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:54.367348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:54.367360 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.442973 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:54.442995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:56.980627 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:56.990375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:56.990434 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:57.016699 1849924 cri.go:89] found id: ""
	I1124 09:56:57.016713 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.016720 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:57.016726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:57.016789 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:57.042924 1849924 cri.go:89] found id: ""
	I1124 09:56:57.042938 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.042945 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:57.042950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:57.043009 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:57.071972 1849924 cri.go:89] found id: ""
	I1124 09:56:57.071986 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.071993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:57.071998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:57.072057 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:57.097765 1849924 cri.go:89] found id: ""
	I1124 09:56:57.097780 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.097789 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:57.097796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:57.097861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:57.124764 1849924 cri.go:89] found id: ""
	I1124 09:56:57.124778 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.124796 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:57.124802 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:57.124871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:57.151558 1849924 cri.go:89] found id: ""
	I1124 09:56:57.151584 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.151591 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:57.151597 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:57.151667 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:57.178335 1849924 cri.go:89] found id: ""
	I1124 09:56:57.178348 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.178355 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:57.178372 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:57.178383 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:57.253968 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:57.253988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:57.284364 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:57.284380 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:57.349827 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:57.349847 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:57.364617 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:57.364633 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:57.425688 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:59.926489 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:59.936801 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:59.936870 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:59.961715 1849924 cri.go:89] found id: ""
	I1124 09:56:59.961728 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.961735 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:59.961741 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:59.961801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:59.990466 1849924 cri.go:89] found id: ""
	I1124 09:56:59.990480 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.990488 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:59.990494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:59.990554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:00.129137 1849924 cri.go:89] found id: ""
	I1124 09:57:00.129161 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.129169 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:00.129175 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:00.129257 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:00.211462 1849924 cri.go:89] found id: ""
	I1124 09:57:00.211478 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.211490 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:00.211506 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:00.211593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:00.274315 1849924 cri.go:89] found id: ""
	I1124 09:57:00.274338 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.274346 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:00.274363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:00.274453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:00.321199 1849924 cri.go:89] found id: ""
	I1124 09:57:00.321233 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.321241 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:00.321247 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:00.321324 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:00.372845 1849924 cri.go:89] found id: ""
	I1124 09:57:00.372861 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.372869 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:00.372878 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:00.372889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:00.444462 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:00.444485 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:00.465343 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:00.465381 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:00.553389 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:00.553402 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:00.553418 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:00.632199 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:00.632219 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:03.162773 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:03.173065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:03.173150 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:03.200418 1849924 cri.go:89] found id: ""
	I1124 09:57:03.200431 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.200439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:03.200444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:03.200502 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:03.227983 1849924 cri.go:89] found id: ""
	I1124 09:57:03.227997 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.228004 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:03.228009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:03.228070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:03.257554 1849924 cri.go:89] found id: ""
	I1124 09:57:03.257568 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.257575 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:03.257581 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:03.257639 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:03.283198 1849924 cri.go:89] found id: ""
	I1124 09:57:03.283210 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.283217 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:03.283223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:03.283280 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:03.307981 1849924 cri.go:89] found id: ""
	I1124 09:57:03.307994 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.308002 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:03.308007 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:03.308063 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:03.337021 1849924 cri.go:89] found id: ""
	I1124 09:57:03.337035 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.337042 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:03.337047 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:03.337130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:03.362116 1849924 cri.go:89] found id: ""
	I1124 09:57:03.362130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.362137 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:03.362144 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:03.362155 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:03.427932 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:03.427951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:03.442952 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:03.442968 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:03.527978 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:03.527989 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:03.528002 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:03.603993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:03.604012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.134966 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:06.147607 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:06.147673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:06.173217 1849924 cri.go:89] found id: ""
	I1124 09:57:06.173231 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.173238 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:06.173243 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:06.173302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:06.203497 1849924 cri.go:89] found id: ""
	I1124 09:57:06.203511 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.203518 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:06.203524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:06.203581 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:06.232192 1849924 cri.go:89] found id: ""
	I1124 09:57:06.232205 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.232212 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:06.232219 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:06.232276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:06.261698 1849924 cri.go:89] found id: ""
	I1124 09:57:06.261711 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.261717 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:06.261723 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:06.261779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:06.286623 1849924 cri.go:89] found id: ""
	I1124 09:57:06.286642 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.286650 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:06.286656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:06.286717 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:06.316085 1849924 cri.go:89] found id: ""
	I1124 09:57:06.316098 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.316105 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:06.316110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:06.316169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:06.344243 1849924 cri.go:89] found id: ""
	I1124 09:57:06.344257 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.344264 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:06.344273 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:06.344283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.375793 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:06.375809 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:06.441133 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:06.441160 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:06.457259 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:06.457282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:06.534017 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
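The "container status" steps in this log shell out with a runtime fallback: ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`` — resolve crictl's path if installed (or leave the bare name so the call fails fast), then fall back to `docker ps -a`. A minimal sketch of that fallback shape; `run_crictl` and `run_docker` are hypothetical stand-ins for the sudo'd runtime calls so the sketch runs anywhere:

```shell
# Fallback pattern from the "container status" gathering step.
# run_crictl / run_docker stand in for `sudo crictl ps -a` and
# `sudo docker ps -a`; crictl is simulated as unavailable here.
run_crictl() { return 1; }
run_docker() { echo "falling back: docker ps -a"; }

container_status() {
  # Same shape as: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
  run_crictl || run_docker
}

container_status
```

The `||` chain means docker is only consulted when the CRI listing fails, which is why a crio node with a broken crictl still yields some container inventory.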
	I1124 09:57:06.534028 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:06.534040 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.110740 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:09.122421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:09.122484 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:09.148151 1849924 cri.go:89] found id: ""
	I1124 09:57:09.148165 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.148172 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:09.148177 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:09.148235 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:09.173265 1849924 cri.go:89] found id: ""
	I1124 09:57:09.173279 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.173288 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:09.173295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:09.173357 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:09.198363 1849924 cri.go:89] found id: ""
	I1124 09:57:09.198377 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.198384 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:09.198389 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:09.198447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:09.224567 1849924 cri.go:89] found id: ""
	I1124 09:57:09.224581 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.224588 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:09.224594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:09.224652 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:09.249182 1849924 cri.go:89] found id: ""
	I1124 09:57:09.249195 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.249205 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:09.249210 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:09.249281 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:09.274039 1849924 cri.go:89] found id: ""
	I1124 09:57:09.274053 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.274060 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:09.274065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:09.274125 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:09.299730 1849924 cri.go:89] found id: ""
	I1124 09:57:09.299744 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.299751 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:09.299758 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:09.299770 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:09.364094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:09.364105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:09.364120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.441482 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:09.441504 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:09.479944 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:09.479961 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:09.549349 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:09.549367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:12.064927 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:12.075315 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:12.075376 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:12.103644 1849924 cri.go:89] found id: ""
	I1124 09:57:12.103658 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.103665 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:12.103670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:12.103774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:12.129120 1849924 cri.go:89] found id: ""
	I1124 09:57:12.129134 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.129141 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:12.129147 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:12.129215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:12.156010 1849924 cri.go:89] found id: ""
	I1124 09:57:12.156024 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.156031 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:12.156036 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:12.156094 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:12.184275 1849924 cri.go:89] found id: ""
	I1124 09:57:12.184289 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.184296 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:12.184301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:12.184362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:12.214700 1849924 cri.go:89] found id: ""
	I1124 09:57:12.214713 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.214726 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:12.214732 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:12.214792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:12.239546 1849924 cri.go:89] found id: ""
	I1124 09:57:12.239559 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.239566 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:12.239572 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:12.239635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:12.264786 1849924 cri.go:89] found id: ""
	I1124 09:57:12.264800 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.264806 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:12.264814 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:12.264826 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:12.324457 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:12.324467 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:12.324477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:12.401396 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:12.401417 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:12.432520 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:12.432535 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:12.502857 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:12.502877 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.018809 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:15.038661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:15.038741 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:15.069028 1849924 cri.go:89] found id: ""
	I1124 09:57:15.069043 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.069050 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:15.069056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:15.069139 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:15.096495 1849924 cri.go:89] found id: ""
	I1124 09:57:15.096513 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.096521 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:15.096526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:15.096593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:15.125417 1849924 cri.go:89] found id: ""
	I1124 09:57:15.125430 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.125438 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:15.125444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:15.125508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:15.152259 1849924 cri.go:89] found id: ""
	I1124 09:57:15.152274 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.152281 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:15.152287 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:15.152348 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:15.178920 1849924 cri.go:89] found id: ""
	I1124 09:57:15.178934 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.178942 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:15.178947 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:15.179024 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:15.207630 1849924 cri.go:89] found id: ""
	I1124 09:57:15.207643 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.207650 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:15.207656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:15.207715 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:15.237971 1849924 cri.go:89] found id: ""
	I1124 09:57:15.237985 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.237992 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:15.238000 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:15.238011 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:15.305169 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:15.305187 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.320240 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:15.320257 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:15.393546 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:15.393556 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:15.393592 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:15.470159 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:15.470179 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:18.001255 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:18.013421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:18.013488 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:18.040787 1849924 cri.go:89] found id: ""
	I1124 09:57:18.040801 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.040808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:18.040814 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:18.040873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:18.066460 1849924 cri.go:89] found id: ""
	I1124 09:57:18.066475 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.066482 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:18.066487 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:18.066544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:18.093970 1849924 cri.go:89] found id: ""
	I1124 09:57:18.093983 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.093990 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:18.093998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:18.094070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:18.119292 1849924 cri.go:89] found id: ""
	I1124 09:57:18.119306 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.119312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:18.119318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:18.119375 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:18.144343 1849924 cri.go:89] found id: ""
	I1124 09:57:18.144356 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.144363 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:18.144369 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:18.144428 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:18.176349 1849924 cri.go:89] found id: ""
	I1124 09:57:18.176362 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.176369 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:18.176375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:18.176435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:18.200900 1849924 cri.go:89] found id: ""
	I1124 09:57:18.200913 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.200920 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:18.200927 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:18.200938 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:18.266434 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:18.266452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:18.281611 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:18.281627 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:18.347510 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:18.347523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:18.347536 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:18.435234 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:18.435254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:20.973569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:20.984347 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:20.984418 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:21.011115 1849924 cri.go:89] found id: ""
	I1124 09:57:21.011130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.011137 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:21.011142 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:21.011204 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:21.041877 1849924 cri.go:89] found id: ""
	I1124 09:57:21.041891 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.041899 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:21.041904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:21.041963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:21.067204 1849924 cri.go:89] found id: ""
	I1124 09:57:21.067217 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.067224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:21.067229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:21.067288 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:21.096444 1849924 cri.go:89] found id: ""
	I1124 09:57:21.096458 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.096464 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:21.096470 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:21.096526 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:21.122011 1849924 cri.go:89] found id: ""
	I1124 09:57:21.122025 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.122033 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:21.122038 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:21.122098 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:21.150504 1849924 cri.go:89] found id: ""
	I1124 09:57:21.150518 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.150525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:21.150530 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:21.150601 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:21.179560 1849924 cri.go:89] found id: ""
	I1124 09:57:21.179573 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.179579 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:21.179587 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:21.179597 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:21.263112 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:21.263134 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:21.291875 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:21.291891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:21.358120 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:21.358139 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:21.373381 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:21.373401 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:21.437277 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:23.938404 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:23.948703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:23.948770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:23.975638 1849924 cri.go:89] found id: ""
	I1124 09:57:23.975653 1849924 logs.go:282] 0 containers: []
	W1124 09:57:23.975660 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:23.975666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:23.975797 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:24.003099 1849924 cri.go:89] found id: ""
	I1124 09:57:24.003114 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.003122 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:24.003127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:24.003195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:24.031320 1849924 cri.go:89] found id: ""
	I1124 09:57:24.031333 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.031340 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:24.031345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:24.031412 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:24.057464 1849924 cri.go:89] found id: ""
	I1124 09:57:24.057479 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.057486 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:24.057491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:24.057560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:24.083571 1849924 cri.go:89] found id: ""
	I1124 09:57:24.083586 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.083593 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:24.083598 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:24.083656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:24.109710 1849924 cri.go:89] found id: ""
	I1124 09:57:24.109724 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.109732 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:24.109737 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:24.109810 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:24.134957 1849924 cri.go:89] found id: ""
	I1124 09:57:24.134971 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.134978 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:24.134985 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:24.134995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:24.206698 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:24.206725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:24.221977 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:24.221995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:24.287450 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:24.287461 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:24.287474 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:24.364870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:24.364890 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:26.899825 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:26.911192 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:26.911260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:26.937341 1849924 cri.go:89] found id: ""
	I1124 09:57:26.937355 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.937361 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:26.937367 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:26.937429 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:26.966037 1849924 cri.go:89] found id: ""
	I1124 09:57:26.966050 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.966057 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:26.966062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:26.966119 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:26.994487 1849924 cri.go:89] found id: ""
	I1124 09:57:26.994501 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.994508 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:26.994514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:26.994572 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:27.024331 1849924 cri.go:89] found id: ""
	I1124 09:57:27.024345 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.024351 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:27.024357 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:27.024414 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:27.051922 1849924 cri.go:89] found id: ""
	I1124 09:57:27.051936 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.051943 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:27.051949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:27.052007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:27.079084 1849924 cri.go:89] found id: ""
	I1124 09:57:27.079097 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.079104 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:27.079110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:27.079166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:27.105333 1849924 cri.go:89] found id: ""
	I1124 09:57:27.105346 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.105362 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:27.105371 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:27.105399 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:27.136135 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:27.136151 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:27.202777 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:27.202797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:27.218147 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:27.218169 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:27.287094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:27.287105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:27.287116 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:29.863883 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:29.874162 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:29.874270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:29.899809 1849924 cri.go:89] found id: ""
	I1124 09:57:29.899825 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.899833 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:29.899839 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:29.899897 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:29.925268 1849924 cri.go:89] found id: ""
	I1124 09:57:29.925282 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.925289 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:29.925295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:29.925355 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:29.953756 1849924 cri.go:89] found id: ""
	I1124 09:57:29.953770 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.953778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:29.953783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:29.953844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:29.979723 1849924 cri.go:89] found id: ""
	I1124 09:57:29.979737 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.979744 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:29.979750 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:29.979809 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:30.029207 1849924 cri.go:89] found id: ""
	I1124 09:57:30.029223 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.029231 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:30.029237 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:30.029307 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:30.086347 1849924 cri.go:89] found id: ""
	I1124 09:57:30.086364 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.086374 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:30.086381 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:30.086453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:30.117385 1849924 cri.go:89] found id: ""
	I1124 09:57:30.117412 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.117420 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:30.117429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:30.117442 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:30.134069 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:30.134089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:30.200106 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:30.200116 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:30.200131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:30.277714 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:30.277734 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:30.306530 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:30.306548 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:32.873889 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:32.884169 1849924 kubeadm.go:602] duration metric: took 4m3.946947382s to restartPrimaryControlPlane
	W1124 09:57:32.884229 1849924 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1124 09:57:32.884313 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 09:57:33.294612 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:57:33.307085 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:57:33.314867 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:57:33.314936 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:57:33.322582 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:57:33.322593 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 09:57:33.322667 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:57:33.330196 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:57:33.330260 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:57:33.337917 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:57:33.345410 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:57:33.345471 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:57:33.352741 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.360084 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:57:33.360141 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.367359 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:57:33.374680 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:57:33.374740 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:57:33.381720 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:57:33.421475 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:57:33.421672 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:57:33.492568 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:57:33.492631 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:57:33.492668 1849924 kubeadm.go:319] OS: Linux
	I1124 09:57:33.492712 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:57:33.492759 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:57:33.492805 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:57:33.492852 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:57:33.492898 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:57:33.492945 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:57:33.492989 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:57:33.493036 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:57:33.493080 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:57:33.559811 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:57:33.559935 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:57:33.560031 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:57:33.569641 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:57:33.572593 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 09:57:33.572694 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:57:33.572778 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:57:33.572897 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 09:57:33.572970 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 09:57:33.573053 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 09:57:33.573134 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 09:57:33.573209 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 09:57:33.573281 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 09:57:33.573362 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 09:57:33.573444 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 09:57:33.573489 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 09:57:33.573554 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:57:34.404229 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:57:34.574070 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:57:34.974228 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:57:35.133185 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:57:35.260833 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:57:35.261355 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:57:35.265684 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:57:35.269119 1849924 out.go:252]   - Booting up control plane ...
	I1124 09:57:35.269213 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:57:35.269289 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:57:35.269807 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:57:35.284618 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:57:35.284910 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:57:35.293324 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:57:35.293620 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:57:35.293661 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:57:35.424973 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:57:35.425087 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:01:35.425195 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000242606s
	I1124 10:01:35.425226 1849924 kubeadm.go:319] 
	I1124 10:01:35.425316 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:01:35.425374 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:01:35.425488 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:01:35.425495 1849924 kubeadm.go:319] 
	I1124 10:01:35.425617 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:01:35.425655 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:01:35.425685 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:01:35.425690 1849924 kubeadm.go:319] 
	I1124 10:01:35.429378 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:01:35.429792 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:01:35.429899 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:01:35.430134 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:01:35.430138 1849924 kubeadm.go:319] 
	I1124 10:01:35.430206 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1124 10:01:35.430308 1849924 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000242606s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1124 10:01:35.430396 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 10:01:35.837421 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:01:35.850299 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 10:01:35.850356 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:01:35.858169 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 10:01:35.858180 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 10:01:35.858230 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 10:01:35.866400 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 10:01:35.866456 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 10:01:35.873856 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 10:01:35.881958 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 10:01:35.882015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 10:01:35.889339 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.896920 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 10:01:35.896977 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.904670 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 10:01:35.912117 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 10:01:35.912171 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:01:35.919741 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 10:01:35.956259 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 10:01:35.956313 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 10:01:36.031052 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 10:01:36.031118 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 10:01:36.031152 1849924 kubeadm.go:319] OS: Linux
	I1124 10:01:36.031196 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 10:01:36.031243 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 10:01:36.031289 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 10:01:36.031336 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 10:01:36.031383 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 10:01:36.031430 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 10:01:36.031474 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 10:01:36.031521 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 10:01:36.031566 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 10:01:36.099190 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 10:01:36.099321 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 10:01:36.099441 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 10:01:36.106857 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 10:01:36.112186 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 10:01:36.112274 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 10:01:36.112337 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 10:01:36.112413 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 10:01:36.112473 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 10:01:36.112542 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 10:01:36.112594 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 10:01:36.112656 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 10:01:36.112719 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 10:01:36.112792 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 10:01:36.112863 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 10:01:36.112900 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 10:01:36.112954 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 10:01:36.197295 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 10:01:36.531352 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 10:01:36.984185 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 10:01:37.290064 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 10:01:37.558441 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 10:01:37.559017 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 10:01:37.561758 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 10:01:37.564997 1849924 out.go:252]   - Booting up control plane ...
	I1124 10:01:37.565117 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 10:01:37.565200 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 10:01:37.566811 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 10:01:37.581952 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 10:01:37.582056 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 10:01:37.589882 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 10:01:37.590273 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 10:01:37.590483 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 10:01:37.733586 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 10:01:37.733692 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:05:37.728742 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000440097s
	I1124 10:05:37.728760 1849924 kubeadm.go:319] 
	I1124 10:05:37.729148 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:05:37.729217 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:05:37.729548 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:05:37.729554 1849924 kubeadm.go:319] 
	I1124 10:05:37.729744 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:05:37.729799 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:05:37.729853 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:05:37.729860 1849924 kubeadm.go:319] 
	I1124 10:05:37.734894 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:05:37.735345 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:05:37.735452 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:05:37.735693 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:05:37.735697 1849924 kubeadm.go:319] 
	I1124 10:05:37.735773 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1124 10:05:37.735829 1849924 kubeadm.go:403] duration metric: took 12m8.833752588s to StartCluster
	I1124 10:05:37.735872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:05:37.735930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:05:37.769053 1849924 cri.go:89] found id: ""
	I1124 10:05:37.769070 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.769076 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:05:37.769083 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:05:37.769166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:05:37.796753 1849924 cri.go:89] found id: ""
	I1124 10:05:37.796767 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.796774 1849924 logs.go:284] No container was found matching "etcd"
	I1124 10:05:37.796780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:05:37.796839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:05:37.822456 1849924 cri.go:89] found id: ""
	I1124 10:05:37.822470 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.822487 1849924 logs.go:284] No container was found matching "coredns"
	I1124 10:05:37.822492 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:05:37.822556 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:05:37.847572 1849924 cri.go:89] found id: ""
	I1124 10:05:37.847587 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.847594 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:05:37.847601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:05:37.847660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:05:37.874600 1849924 cri.go:89] found id: ""
	I1124 10:05:37.874614 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.874621 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:05:37.874630 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:05:37.874694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:05:37.899198 1849924 cri.go:89] found id: ""
	I1124 10:05:37.899212 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.899220 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:05:37.899226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:05:37.899286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:05:37.927492 1849924 cri.go:89] found id: ""
	I1124 10:05:37.927506 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.927513 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 10:05:37.927521 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 10:05:37.927531 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:05:37.996934 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 10:05:37.996954 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:05:38.018248 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:05:38.018265 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:05:38.095385 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:05:38.095401 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:05:38.095411 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:05:38.170993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 10:05:38.171016 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:05:38.204954 1849924 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1124 10:05:38.205004 1849924 out.go:285] * 
	W1124 10:05:38.205075 1849924 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.205091 1849924 out.go:285] * 
	W1124 10:05:38.207567 1849924 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 10:05:38.212617 1849924 out.go:203] 
	W1124 10:05:38.216450 1849924 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.216497 1849924 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1124 10:05:38.216516 1849924 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1124 10:05:38.219595 1849924 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.571892719Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=aef09199-0d9c-4fcd-a86e-4644b84003d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601271581Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601433691Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.60148682Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.673335433Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=df47687b-4b6a-4acb-8d1e-f46521441883 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.702968936Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.703135847Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.703183371Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.732961212Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.733138962Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.7331819Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.260656424Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=d07a4d73-f74e-45cd-9c4d-fd518a9e69a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.301046166Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.301216031Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.3012543Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.339616029Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.33997436Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.340022221Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.333274376Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=0d853bf6-0cff-41f5-a62e-2b21fedcbf72 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366217435Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366382032Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366430164Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.391919753Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.392065551Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.392106118Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:07:59.212546   24068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:07:59.213148   24068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:07:59.214648   24068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:07:59.215025   24068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:07:59.216549   24068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:07:59 up  8:50,  0 user,  load average: 0.27, 0.32, 0.39
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 10:07:56 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:07:57 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1147.
	Nov 24 10:07:57 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:07:57 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:07:57 functional-373432 kubelet[23960]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:07:57 functional-373432 kubelet[23960]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:07:57 functional-373432 kubelet[23960]: E1124 10:07:57.508678   23960 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:07:57 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:07:57 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:07:58 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1148.
	Nov 24 10:07:58 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:07:58 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:07:58 functional-373432 kubelet[23967]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:07:58 functional-373432 kubelet[23967]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:07:58 functional-373432 kubelet[23967]: E1124 10:07:58.266843   23967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:07:58 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:07:58 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:07:58 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1149.
	Nov 24 10:07:58 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:07:58 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:07:59 functional-373432 kubelet[24015]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:07:59 functional-373432 kubelet[24015]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:07:59 functional-373432 kubelet[24015]: E1124 10:07:59.019968   24015 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:07:59 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:07:59 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (553.032589ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.65s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1124 10:06:03.238454 1806704 retry.go:31] will retry after 2.623239564s: Temporary Error: Get "http://10.107.117.210": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1124 10:06:15.862269 1806704 retry.go:31] will retry after 5.549598612s: Temporary Error: Get "http://10.107.117.210": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1124 10:06:31.413154 1806704 retry.go:31] will retry after 5.61969202s: Temporary Error: Get "http://10.107.117.210": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1124 10:06:47.033950 1806704 retry.go:31] will retry after 5.473176285s: Temporary Error: Get "http://10.107.117.210": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1124 10:07:02.508538 1806704 retry.go:31] will retry after 8.220153268s: Temporary Error: Get "http://10.107.117.210": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1124 10:07:20.729856 1806704 retry.go:31] will retry after 26.741722445s: Temporary Error: Get "http://10.107.117.210": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1124 10:07:54.300016 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: [previous WARNING line repeated 78 more times while waiting for the apiserver]
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (315.529688ms)
-- stdout --
	Stopped
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:
-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (315.319823ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-373432 ssh findmnt -T /mount-9p | grep 9p                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh            │ functional-373432 ssh findmnt -T /mount-9p | grep 9p                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh            │ functional-373432 ssh -- ls -la /mount-9p                                                                                                     │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh            │ functional-373432 ssh sudo umount -f /mount-9p                                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount          │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount1 --alsologtostderr -v=1          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount          │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount2 --alsologtostderr -v=1          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ mount          │ -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount3 --alsologtostderr -v=1          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ ssh            │ functional-373432 ssh findmnt -T /mount1                                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh            │ functional-373432 ssh findmnt -T /mount2                                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh            │ functional-373432 ssh findmnt -T /mount3                                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ mount          │ -p functional-373432 --kill=true                                                                                                              │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ start          │ -p functional-373432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ start          │ -p functional-373432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ start          │ -p functional-373432 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-373432 --alsologtostderr -v=1                                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ update-context │ functional-373432 update-context --alsologtostderr -v=2                                                                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ update-context │ functional-373432 update-context --alsologtostderr -v=2                                                                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ update-context │ functional-373432 update-context --alsologtostderr -v=2                                                                                       │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ image          │ functional-373432 image ls --format short --alsologtostderr                                                                                   │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ image          │ functional-373432 image ls --format yaml --alsologtostderr                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ ssh            │ functional-373432 ssh pgrep buildkitd                                                                                                         │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │                     │
	│ image          │ functional-373432 image build -t localhost/my-image:functional-373432 testdata/build --alsologtostderr                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ image          │ functional-373432 image ls                                                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ image          │ functional-373432 image ls --format json --alsologtostderr                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	│ image          │ functional-373432 image ls --format table --alsologtostderr                                                                                   │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:08 UTC │ 24 Nov 25 10:08 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 10:08:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 10:08:13.424963 1868773 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:08:13.425165 1868773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:08:13.425197 1868773 out.go:374] Setting ErrFile to fd 2...
	I1124 10:08:13.425218 1868773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:08:13.425512 1868773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:08:13.425913 1868773 out.go:368] Setting JSON to false
	I1124 10:08:13.426791 1868773 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31844,"bootTime":1763947050,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 10:08:13.426892 1868773 start.go:143] virtualization:  
	I1124 10:08:13.430180 1868773 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 10:08:13.433949 1868773 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 10:08:13.434016 1868773 notify.go:221] Checking for updates...
	I1124 10:08:13.440162 1868773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 10:08:13.442942 1868773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:08:13.445792 1868773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 10:08:13.448556 1868773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 10:08:13.451408 1868773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 10:08:13.454901 1868773 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 10:08:13.455498 1868773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 10:08:13.483789 1868773 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 10:08:13.483893 1868773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:08:13.534092 1868773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:08:13.524576717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:08:13.534192 1868773 docker.go:319] overlay module found
	I1124 10:08:13.537292 1868773 out.go:179] * Using the docker driver based on existing profile
	I1124 10:08:13.540209 1868773 start.go:309] selected driver: docker
	I1124 10:08:13.540227 1868773 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:08:13.540322 1868773 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 10:08:13.540435 1868773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:08:13.594036 1868773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:08:13.585629488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:08:13.594447 1868773 cni.go:84] Creating CNI manager for ""
	I1124 10:08:13.594516 1868773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:08:13.594572 1868773 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:08:13.597775 1868773 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.571892719Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=aef09199-0d9c-4fcd-a86e-4644b84003d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601271581Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601433691Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.60148682Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.673335433Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=df47687b-4b6a-4acb-8d1e-f46521441883 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.702968936Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.703135847Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.703183371Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=30547a19-5419-4812-a74c-eaca0229abe9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.732961212Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.733138962Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.7331819Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=ece51449-d954-45ad-abba-a2cf8b7ef65d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.260656424Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=d07a4d73-f74e-45cd-9c4d-fd518a9e69a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.301046166Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.301216031Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.3012543Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=97911ea1-2701-4bd6-a9fd-8ec55c257f60 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.339616029Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.33997436Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:50 functional-373432 crio[10735]: time="2025-11-24T10:05:50.340022221Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-373432 found" id=742b0be5-2727-4639-be3d-83b3951a114e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.333274376Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=0d853bf6-0cff-41f5-a62e-2b21fedcbf72 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366217435Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366382032Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.366430164Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-373432 found" id=cd62e40d-0c2e-4515-9966-8e42fe27e0ec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.391919753Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.392065551Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:51 functional-373432 crio[10735]: time="2025-11-24T10:05:51.392106118Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-373432 found" id=c7ea60fb-b20f-4f34-ac44-ccff2d657893 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:09:54.430070   26168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:09:54.430971   26168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:09:54.431756   26168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:09:54.432947   26168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:09:54.433611   26168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:09:54 up  8:52,  0 user,  load average: 0.35, 0.45, 0.44
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 10:09:52 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:09:52 functional-373432 kubelet[26042]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:09:52 functional-373432 kubelet[26042]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:09:52 functional-373432 kubelet[26042]: E1124 10:09:52.247770   26042 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:09:52 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:09:52 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:09:52 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1301.
	Nov 24 10:09:52 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:09:52 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:09:52 functional-373432 kubelet[26048]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:09:52 functional-373432 kubelet[26048]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:09:52 functional-373432 kubelet[26048]: E1124 10:09:52.997764   26048 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:09:53 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:09:53 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:09:53 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1302.
	Nov 24 10:09:53 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:09:53 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:09:53 functional-373432 kubelet[26076]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:09:53 functional-373432 kubelet[26076]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:09:53 functional-373432 kubelet[26076]: E1124 10:09:53.753113   26076 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:09:53 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:09:53 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:09:54 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1303.
	Nov 24 10:09:54 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:09:54 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (304.468895ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-373432 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-373432 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (107.515769ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-373432 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-373432
helpers_test.go:243: (dbg) docker inspect functional-373432:

-- stdout --
	[
	    {
	        "Id": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	        "Created": "2025-11-24T09:38:28.400939169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:38:28.471709183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65/ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65-json.log",
	        "Name": "/functional-373432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-373432:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-373432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3e2c9d5b10b2a5c5765bd5a0197e035ce78226c7272d47ceac731c7c5aad65",
	                "LowerDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5044f3ac256e8e67a067ceacba27b174e505cd151a3cc8482dcd34895cf1815/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-373432",
	                "Source": "/var/lib/docker/volumes/functional-373432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-373432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-373432",
	                "name.minikube.sigs.k8s.io": "functional-373432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "690ce9ceb0bda21617ebe03b462f193dcf2fc729d44ad57d476a6d9aef441653",
	            "SandboxKey": "/var/run/docker/netns/690ce9ceb0bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35005"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35006"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35008"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-373432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:9d:5d:72:0a:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef974a48341fbe78fbc2558a0881eb99cedddf92e17155f2ff31375612afdf3f",
	                    "EndpointID": "4cc34c91c2af483f16f3c4397488debfa11a732a8f32b619438ba8f028d7318c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-373432",
	                        "ed3e2c9d5b10"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-373432 -n functional-373432: exit status 2 (379.809526ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 logs -n 25: (1.408299867s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-373432 ssh sudo crictl images                                                                                                                     │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh     │ functional-373432 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh     │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ cache   │ functional-373432 cache reload                                                                                                                               │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ ssh     │ functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │ 24 Nov 25 09:53 UTC │
	│ kubectl │ functional-373432 kubectl -- --context functional-373432 get pods                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ start   │ -p functional-373432 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 09:53 UTC │                     │
	│ cp      │ functional-373432 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ config  │ functional-373432 config unset cpus                                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ config  │ functional-373432 config get cpus                                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │                     │
	│ config  │ functional-373432 config set cpus 2                                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ config  │ functional-373432 config get cpus                                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ config  │ functional-373432 config unset cpus                                                                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh -n functional-373432 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ config  │ functional-373432 config get cpus                                                                                                                            │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │                     │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ cp      │ functional-373432 cp functional-373432:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3998041042/001/cp-test.txt │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo systemctl is-active docker                                                                                                        │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │                     │
	│ ssh     │ functional-373432 ssh -n functional-373432 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh sudo systemctl is-active containerd                                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │                     │
	│ cp      │ functional-373432 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ ssh     │ functional-373432 ssh -n functional-373432 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	│ image   │ functional-373432 image load --daemon kicbase/echo-server:functional-373432 --alsologtostderr                                                                │ functional-373432 │ jenkins │ v1.37.0 │ 24 Nov 25 10:05 UTC │ 24 Nov 25 10:05 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:53:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:53:23.394373 1849924 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:53:23.394473 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394476 1849924 out.go:374] Setting ErrFile to fd 2...
	I1124 09:53:23.394480 1849924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:53:23.394868 1849924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:53:23.395314 1849924 out.go:368] Setting JSON to false
	I1124 09:53:23.396438 1849924 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30954,"bootTime":1763947050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:53:23.396523 1849924 start.go:143] virtualization:  
	I1124 09:53:23.399850 1849924 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:53:23.403618 1849924 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:53:23.403698 1849924 notify.go:221] Checking for updates...
	I1124 09:53:23.409546 1849924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:53:23.412497 1849924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:53:23.415264 1849924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:53:23.418109 1849924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:53:23.420908 1849924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:53:23.424158 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:23.424263 1849924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:53:23.449398 1849924 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:53:23.449524 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.505939 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.496540271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.506033 1849924 docker.go:319] overlay module found
	I1124 09:53:23.509224 1849924 out.go:179] * Using the docker driver based on existing profile
	I1124 09:53:23.512245 1849924 start.go:309] selected driver: docker
	I1124 09:53:23.512255 1849924 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.512340 1849924 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:53:23.512454 1849924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:53:23.568317 1849924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 09:53:23.558792888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:53:23.568738 1849924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:53:23.568763 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:23.568821 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:23.568862 1849924 start.go:353] cluster config:
	{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:23.571988 1849924 out.go:179] * Starting "functional-373432" primary control-plane node in "functional-373432" cluster
	I1124 09:53:23.574929 1849924 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:53:23.577959 1849924 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:53:23.580671 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:23.580735 1849924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:53:23.600479 1849924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:53:23.600490 1849924 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:53:23.634350 1849924 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 09:53:24.054820 1849924 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 09:53:24.054990 1849924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/config.json ...
	I1124 09:53:24.055122 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.055240 1849924 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:53:24.055269 1849924 start.go:360] acquireMachinesLock for functional-373432: {Name:mk8b07b99ed5edd55893106dae425ab43134e2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.055313 1849924 start.go:364] duration metric: took 27.192µs to acquireMachinesLock for "functional-373432"
	I1124 09:53:24.055327 1849924 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:53:24.055331 1849924 fix.go:54] fixHost starting: 
	I1124 09:53:24.055580 1849924 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 09:53:24.072844 1849924 fix.go:112] recreateIfNeeded on functional-373432: state=Running err=<nil>
	W1124 09:53:24.072865 1849924 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:53:24.076050 1849924 out.go:252] * Updating the running docker "functional-373432" container ...
	I1124 09:53:24.076079 1849924 machine.go:94] provisionDockerMachine start ...
	I1124 09:53:24.076162 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.100870 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.101221 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.101228 1849924 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:53:24.232623 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.252893 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.252907 1849924 ubuntu.go:182] provisioning hostname "functional-373432"
	I1124 09:53:24.252988 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.280057 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.280362 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.280376 1849924 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-373432 && echo "functional-373432" | sudo tee /etc/hostname
	I1124 09:53:24.402975 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:24.467980 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-373432
	
	I1124 09:53:24.468079 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:24.499770 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:24.500067 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:24.500084 1849924 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-373432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-373432/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-373432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:53:24.556663 1849924 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556759 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:53:24.556767 1849924 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.133µs
	I1124 09:53:24.556774 1849924 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:53:24.556785 1849924 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556814 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:53:24.556818 1849924 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.266µs
	I1124 09:53:24.556823 1849924 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556832 1849924 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556867 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:53:24.556871 1849924 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 39.738µs
	I1124 09:53:24.556876 1849924 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556884 1849924 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556911 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:53:24.556915 1849924 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.655µs
	I1124 09:53:24.556920 1849924 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556934 1849924 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556959 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:53:24.556963 1849924 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 35.948µs
	I1124 09:53:24.556967 1849924 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:53:24.556975 1849924 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.556999 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:53:24.557011 1849924 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.226µs
	I1124 09:53:24.557015 1849924 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:53:24.557023 1849924 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557048 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:53:24.557051 1849924 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 29.202µs
	I1124 09:53:24.557056 1849924 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:53:24.557065 1849924 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:53:24.557089 1849924 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:53:24.557093 1849924 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.258µs
	I1124 09:53:24.557097 1849924 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:53:24.557129 1849924 cache.go:87] Successfully saved all images to host disk.
	I1124 09:53:24.653937 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:53:24.653952 1849924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 09:53:24.653984 1849924 ubuntu.go:190] setting up certificates
	I1124 09:53:24.653993 1849924 provision.go:84] configureAuth start
	I1124 09:53:24.654058 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:24.671316 1849924 provision.go:143] copyHostCerts
	I1124 09:53:24.671391 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 09:53:24.671399 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 09:53:24.671473 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 09:53:24.671573 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 09:53:24.671577 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 09:53:24.671611 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 09:53:24.671659 1849924 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 09:53:24.671662 1849924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 09:53:24.671684 1849924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 09:53:24.671727 1849924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.functional-373432 san=[127.0.0.1 192.168.49.2 functional-373432 localhost minikube]
	I1124 09:53:25.074688 1849924 provision.go:177] copyRemoteCerts
	I1124 09:53:25.074752 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:53:25.074789 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.095886 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.200905 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:53:25.221330 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 09:53:25.243399 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:53:25.263746 1849924 provision.go:87] duration metric: took 609.720286ms to configureAuth
	I1124 09:53:25.263762 1849924 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:53:25.263945 1849924 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 09:53:25.264045 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.283450 1849924 main.go:143] libmachine: Using SSH client type: native
	I1124 09:53:25.283754 1849924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35005 <nil> <nil>}
	I1124 09:53:25.283770 1849924 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 09:53:25.632249 1849924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 09:53:25.632261 1849924 machine.go:97] duration metric: took 1.556176004s to provisionDockerMachine
	I1124 09:53:25.632272 1849924 start.go:293] postStartSetup for "functional-373432" (driver="docker")
	I1124 09:53:25.632283 1849924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:53:25.632368 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:53:25.632405 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.650974 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.756910 1849924 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:53:25.760285 1849924 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:53:25.760302 1849924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:53:25.760312 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 09:53:25.760370 1849924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 09:53:25.760445 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 09:53:25.760518 1849924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts -> hosts in /etc/test/nested/copy/1806704
	I1124 09:53:25.760561 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1806704
	I1124 09:53:25.767953 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:25.785397 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts --> /etc/test/nested/copy/1806704/hosts (40 bytes)
	I1124 09:53:25.802531 1849924 start.go:296] duration metric: took 170.24573ms for postStartSetup
	I1124 09:53:25.802613 1849924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:53:25.802665 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.819451 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.922232 1849924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:53:25.926996 1849924 fix.go:56] duration metric: took 1.871657348s for fixHost
	I1124 09:53:25.927011 1849924 start.go:83] releasing machines lock for "functional-373432", held for 1.871691088s
	I1124 09:53:25.927085 1849924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-373432
	I1124 09:53:25.943658 1849924 ssh_runner.go:195] Run: cat /version.json
	I1124 09:53:25.943696 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.943958 1849924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:53:25.944002 1849924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 09:53:25.980808 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:25.985182 1849924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 09:53:26.175736 1849924 ssh_runner.go:195] Run: systemctl --version
	I1124 09:53:26.181965 1849924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 09:53:26.217601 1849924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:53:26.221860 1849924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:53:26.221923 1849924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:53:26.229857 1849924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:53:26.229870 1849924 start.go:496] detecting cgroup driver to use...
	I1124 09:53:26.229899 1849924 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 09:53:26.229945 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 09:53:26.244830 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 09:53:26.257783 1849924 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:53:26.257835 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:53:26.273202 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:53:26.286089 1849924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:53:26.392939 1849924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:53:26.505658 1849924 docker.go:234] disabling docker service ...
	I1124 09:53:26.505717 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:53:26.520682 1849924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:53:26.533901 1849924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:53:26.643565 1849924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:53:26.781643 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:53:26.794102 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:53:26.807594 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:26.964951 1849924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 09:53:26.965014 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.974189 1849924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 09:53:26.974248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.982757 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:26.991310 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.000248 1849924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:53:27.009837 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.019258 1849924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.028248 1849924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 09:53:27.037276 1849924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:53:27.045218 1849924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:53:27.052631 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:27.162722 1849924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 09:53:27.344834 1849924 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 09:53:27.344893 1849924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 09:53:27.348791 1849924 start.go:564] Will wait 60s for crictl version
	I1124 09:53:27.348847 1849924 ssh_runner.go:195] Run: which crictl
	I1124 09:53:27.352314 1849924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:53:27.376797 1849924 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 09:53:27.376884 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.404280 1849924 ssh_runner.go:195] Run: crio --version
	I1124 09:53:27.437171 1849924 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 09:53:27.439969 1849924 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:53:27.457621 1849924 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 09:53:27.466585 1849924 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1124 09:53:27.469312 1849924 kubeadm.go:884] updating cluster {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:53:27.469546 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.636904 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.787069 1849924 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 09:53:27.940573 1849924 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 09:53:27.940635 1849924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:53:27.974420 1849924 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 09:53:27.974431 1849924 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:53:27.974436 1849924 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1124 09:53:27.974527 1849924 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-373432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:53:27.974612 1849924 ssh_runner.go:195] Run: crio config
	I1124 09:53:28.037679 1849924 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1124 09:53:28.037700 1849924 cni.go:84] Creating CNI manager for ""
	I1124 09:53:28.037709 1849924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:53:28.037724 1849924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:53:28.037750 1849924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-373432 NodeName:functional-373432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:53:28.037877 1849924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-373432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:53:28.037948 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:53:28.045873 1849924 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:53:28.045941 1849924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:53:28.053444 1849924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1124 09:53:28.066325 1849924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:53:28.079790 1849924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1124 09:53:28.092701 1849924 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:53:28.096834 1849924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:53:28.213078 1849924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:53:28.235943 1849924 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432 for IP: 192.168.49.2
	I1124 09:53:28.235953 1849924 certs.go:195] generating shared ca certs ...
	I1124 09:53:28.235988 1849924 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:53:28.236165 1849924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 09:53:28.236216 1849924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 09:53:28.236222 1849924 certs.go:257] generating profile certs ...
	I1124 09:53:28.236320 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.key
	I1124 09:53:28.236381 1849924 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key.0fcdf36b
	I1124 09:53:28.236430 1849924 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key
	I1124 09:53:28.236545 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 09:53:28.236581 1849924 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 09:53:28.236590 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:53:28.236617 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 09:53:28.236639 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:53:28.236676 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 09:53:28.236733 1849924 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 09:53:28.237452 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:53:28.267491 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:53:28.288261 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:53:28.304655 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:53:28.321607 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:53:28.339914 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:53:28.357697 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:53:28.374827 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:53:28.392170 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 09:53:28.410757 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 09:53:28.428776 1849924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:53:28.446790 1849924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:53:28.459992 1849924 ssh_runner.go:195] Run: openssl version
	I1124 09:53:28.466084 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 09:53:28.474433 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478225 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.478282 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 09:53:28.521415 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 09:53:28.529784 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 09:53:28.538178 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542108 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.542164 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 09:53:28.583128 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:53:28.591113 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:53:28.599457 1849924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603413 1849924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.603474 1849924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:53:28.645543 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:53:28.653724 1849924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:53:28.657603 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:53:28.698734 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:53:28.739586 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:53:28.780289 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:53:28.820840 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:53:28.861343 1849924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:53:28.902087 1849924 kubeadm.go:401] StartCluster: {Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:53:28.902167 1849924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 09:53:28.902236 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.929454 1849924 cri.go:89] found id: ""
	I1124 09:53:28.929519 1849924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:53:28.937203 1849924 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:53:28.937213 1849924 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:53:28.937261 1849924 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:53:28.944668 1849924 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:28.945209 1849924 kubeconfig.go:125] found "functional-373432" server: "https://192.168.49.2:8441"
	I1124 09:53:28.946554 1849924 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:53:28.956044 1849924 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 09:38:48.454819060 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 09:53:28.085978644 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1124 09:53:28.956053 1849924 kubeadm.go:1161] stopping kube-system containers ...
	I1124 09:53:28.956064 1849924 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 09:53:28.956128 1849924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:53:28.991786 1849924 cri.go:89] found id: ""
	I1124 09:53:28.991878 1849924 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 09:53:29.009992 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:53:29.018335 1849924 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 24 09:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 24 09:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Nov 24 09:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 24 09:42 /etc/kubernetes/scheduler.conf
	
	I1124 09:53:29.018393 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:53:29.026350 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:53:29.034215 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.034271 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:53:29.042061 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.049959 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.050015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:53:29.057477 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:53:29.065397 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:53:29.065453 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:53:29.072838 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:53:29.080812 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:29.126682 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:30.915283 1849924 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.788534288s)
	I1124 09:53:30.915375 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.124806 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.187302 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 09:53:31.234732 1849924 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:53:31.234802 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:31.735292 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.235922 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:32.735385 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.235894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:33.734984 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.235509 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:34.735644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.235724 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:35.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.235151 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:36.734994 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.235505 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:37.734925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.235891 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:38.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.235854 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:39.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.235929 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:40.734921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.234991 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:41.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.235015 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:42.734874 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.235403 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:43.734996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.235058 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:44.735496 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.235113 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:45.735894 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.234930 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:46.735636 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.234914 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:47.734875 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.235656 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:48.735578 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.235469 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:49.735823 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.235926 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:50.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.235524 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:51.735679 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.235407 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:52.735614 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.235868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:53.734868 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.235806 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:54.735801 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.235315 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:55.735919 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:56.735842 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.235491 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:57.735486 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.235122 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:58.735029 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.235002 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:53:59.735695 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.236092 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:00.735024 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.235917 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:01.735341 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.235291 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:02.735026 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.235183 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:03.735898 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.235334 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:04.734988 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.234896 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:05.735246 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.235531 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:06.735549 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.235579 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:07.735599 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.234953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:08.734946 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.235705 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:09.735908 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.234909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:10.735831 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.235563 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:11.735909 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.234992 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:12.735855 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.234936 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:13.734993 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.235585 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:14.734942 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.235013 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:15.735371 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.235016 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:16.735593 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.235921 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:17.735653 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.235793 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:18.734939 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.235317 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:19.735001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.235075 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:20.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.234969 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:21.735715 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.234859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:22.735010 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:23.734953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.235545 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:24.735305 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.235127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:25.734989 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.235601 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:26.734933 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.234986 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:27.735250 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.235727 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:28.734976 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.235644 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:29.735675 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.235004 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:30.735127 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:31.234921 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:31.235007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:31.266239 1849924 cri.go:89] found id: ""
	I1124 09:54:31.266252 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.266259 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:31.266265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:31.266323 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:31.294586 1849924 cri.go:89] found id: ""
	I1124 09:54:31.294608 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.294616 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:31.294623 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:31.294694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:31.322061 1849924 cri.go:89] found id: ""
	I1124 09:54:31.322076 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.322083 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:31.322088 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:31.322159 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:31.349139 1849924 cri.go:89] found id: ""
	I1124 09:54:31.349154 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.349161 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:31.349167 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:31.349230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:31.379824 1849924 cri.go:89] found id: ""
	I1124 09:54:31.379838 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.379845 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:31.379850 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:31.379915 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:31.407206 1849924 cri.go:89] found id: ""
	I1124 09:54:31.407220 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.407228 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:31.407233 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:31.407296 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:31.435102 1849924 cri.go:89] found id: ""
	I1124 09:54:31.435117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:31.435123 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:31.435132 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:31.435143 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:31.504759 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:31.504779 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:31.520567 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:31.520584 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:31.587634 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:31.579690   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.580431   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.581999   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.582413   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:31.583938   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:31.587666 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:31.587680 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:31.665843 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:31.665864 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.199426 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:34.210826 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:34.210886 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:34.249730 1849924 cri.go:89] found id: ""
	I1124 09:54:34.249743 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.249769 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:34.249774 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:34.249844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:34.279157 1849924 cri.go:89] found id: ""
	I1124 09:54:34.279171 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.279178 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:34.279183 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:34.279253 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:34.305617 1849924 cri.go:89] found id: ""
	I1124 09:54:34.305631 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.305655 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:34.305661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:34.305730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:34.331221 1849924 cri.go:89] found id: ""
	I1124 09:54:34.331235 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.331243 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:34.331249 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:34.331309 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:34.357361 1849924 cri.go:89] found id: ""
	I1124 09:54:34.357374 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.357381 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:34.357387 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:34.357447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:34.382790 1849924 cri.go:89] found id: ""
	I1124 09:54:34.382805 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.382812 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:34.382817 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:34.382882 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:34.408622 1849924 cri.go:89] found id: ""
	I1124 09:54:34.408635 1849924 logs.go:282] 0 containers: []
	W1124 09:54:34.408653 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:34.408661 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:34.408673 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:34.473355 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:34.464733   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.465633   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467376   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.467935   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:34.469619   11900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:34.473365 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:34.473376 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:34.560903 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:34.560924 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:34.589722 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:34.589738 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:34.659382 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:34.659407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.175501 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:37.187020 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:37.187082 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:37.215497 1849924 cri.go:89] found id: ""
	I1124 09:54:37.215511 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.215518 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:37.215524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:37.215584 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:37.252296 1849924 cri.go:89] found id: ""
	I1124 09:54:37.252310 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.252317 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:37.252323 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:37.252383 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:37.281216 1849924 cri.go:89] found id: ""
	I1124 09:54:37.281230 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.281237 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:37.281242 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:37.281302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:37.307335 1849924 cri.go:89] found id: ""
	I1124 09:54:37.307349 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.307356 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:37.307361 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:37.307435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:37.333186 1849924 cri.go:89] found id: ""
	I1124 09:54:37.333209 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.333217 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:37.333222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:37.333290 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:37.358046 1849924 cri.go:89] found id: ""
	I1124 09:54:37.358060 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.358068 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:37.358074 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:37.358130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:37.388252 1849924 cri.go:89] found id: ""
	I1124 09:54:37.388265 1849924 logs.go:282] 0 containers: []
	W1124 09:54:37.388273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:37.388280 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:37.388291 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:37.423715 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:37.423740 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:37.490800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:37.490819 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:37.506370 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:37.506387 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:37.571587 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:37.563592   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.564337   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.565866   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.566271   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:37.567836   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:37.571597 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:37.571608 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.152603 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:40.164138 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:40.164210 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:40.192566 1849924 cri.go:89] found id: ""
	I1124 09:54:40.192581 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.192589 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:40.192594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:40.192677 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:40.233587 1849924 cri.go:89] found id: ""
	I1124 09:54:40.233616 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.233623 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:40.233628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:40.233702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:40.268152 1849924 cri.go:89] found id: ""
	I1124 09:54:40.268166 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.268173 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:40.268178 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:40.268258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:40.297572 1849924 cri.go:89] found id: ""
	I1124 09:54:40.297586 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.297593 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:40.297605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:40.297666 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:40.328480 1849924 cri.go:89] found id: ""
	I1124 09:54:40.328502 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.328511 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:40.328517 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:40.328583 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:40.354088 1849924 cri.go:89] found id: ""
	I1124 09:54:40.354102 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.354108 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:40.354114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:40.354172 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:40.384758 1849924 cri.go:89] found id: ""
	I1124 09:54:40.384772 1849924 logs.go:282] 0 containers: []
	W1124 09:54:40.384779 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:40.384786 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:40.384797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:40.452137 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:40.452157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:40.467741 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:40.467757 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:40.535224 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:40.527063   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.527655   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.529357   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.530064   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:40.531703   12118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:40.535235 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:40.535246 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:40.615981 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:40.616005 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:43.148076 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:43.158106 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:43.158169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:43.182985 1849924 cri.go:89] found id: ""
	I1124 09:54:43.182999 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.183006 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:43.183012 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:43.183068 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:43.215806 1849924 cri.go:89] found id: ""
	I1124 09:54:43.215820 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.215837 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:43.215844 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:43.215903 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:43.244278 1849924 cri.go:89] found id: ""
	I1124 09:54:43.244301 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.244309 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:43.244314 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:43.244385 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:43.272908 1849924 cri.go:89] found id: ""
	I1124 09:54:43.272931 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.272938 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:43.272949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:43.273029 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:43.297907 1849924 cri.go:89] found id: ""
	I1124 09:54:43.297921 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.297927 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:43.297933 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:43.298008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:43.330376 1849924 cri.go:89] found id: ""
	I1124 09:54:43.330391 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.330397 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:43.330403 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:43.330459 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:43.359850 1849924 cri.go:89] found id: ""
	I1124 09:54:43.359864 1849924 logs.go:282] 0 containers: []
	W1124 09:54:43.359871 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:43.359879 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:43.359898 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:43.426992 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:43.427012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:43.441799 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:43.441816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:43.504072 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:43.496045   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.496727   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498363   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.498902   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:43.500429   12220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:43.504082 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:43.504093 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:43.585362 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:43.585390 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.114191 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:46.124223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:46.124285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:46.151013 1849924 cri.go:89] found id: ""
	I1124 09:54:46.151027 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.151034 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:46.151039 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:46.151096 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:46.177170 1849924 cri.go:89] found id: ""
	I1124 09:54:46.177184 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.177191 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:46.177196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:46.177258 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:46.205800 1849924 cri.go:89] found id: ""
	I1124 09:54:46.205814 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.205822 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:46.205828 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:46.205893 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:46.239665 1849924 cri.go:89] found id: ""
	I1124 09:54:46.239689 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.239697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:46.239702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:46.239782 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:46.274455 1849924 cri.go:89] found id: ""
	I1124 09:54:46.274480 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.274488 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:46.274494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:46.274574 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:46.300659 1849924 cri.go:89] found id: ""
	I1124 09:54:46.300673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.300680 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:46.300686 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:46.300760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:46.326694 1849924 cri.go:89] found id: ""
	I1124 09:54:46.326708 1849924 logs.go:282] 0 containers: []
	W1124 09:54:46.326715 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:46.326723 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:46.326735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:46.389430 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:46.381041   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.382222   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.383478   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.384057   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:46.385835   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:46.389441 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:46.389452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:46.467187 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:46.467207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:46.499873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:46.499889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:46.574600 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:46.574626 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.092671 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:49.102878 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:49.102942 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:49.130409 1849924 cri.go:89] found id: ""
	I1124 09:54:49.130431 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.130439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:49.130445 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:49.130508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:49.156861 1849924 cri.go:89] found id: ""
	I1124 09:54:49.156874 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.156891 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:49.156897 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:49.156964 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:49.183346 1849924 cri.go:89] found id: ""
	I1124 09:54:49.183369 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.183376 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:49.183382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:49.183442 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:49.217035 1849924 cri.go:89] found id: ""
	I1124 09:54:49.217049 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.217056 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:49.217062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:49.217146 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:49.245694 1849924 cri.go:89] found id: ""
	I1124 09:54:49.245713 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.245720 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:49.245726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:49.245891 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:49.284969 1849924 cri.go:89] found id: ""
	I1124 09:54:49.284983 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.284990 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:49.284995 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:49.285055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:49.314521 1849924 cri.go:89] found id: ""
	I1124 09:54:49.314535 1849924 logs.go:282] 0 containers: []
	W1124 09:54:49.314542 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:49.314549 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:49.314560 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:49.398958 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:49.398979 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:49.428494 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:49.428511 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:49.497701 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:49.497725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:49.513336 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:49.513352 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:49.581585 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:49.573598   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.574416   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576067   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.576394   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:49.577752   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.081862 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:52.092629 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:52.092692 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:52.124453 1849924 cri.go:89] found id: ""
	I1124 09:54:52.124475 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.124482 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:52.124488 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:52.124546 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:52.151758 1849924 cri.go:89] found id: ""
	I1124 09:54:52.151771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.151778 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:52.151784 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:52.151844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:52.176757 1849924 cri.go:89] found id: ""
	I1124 09:54:52.176771 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.176778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:52.176783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:52.176846 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:52.201940 1849924 cri.go:89] found id: ""
	I1124 09:54:52.201954 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.201961 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:52.201967 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:52.202025 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:52.248612 1849924 cri.go:89] found id: ""
	I1124 09:54:52.248625 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.248632 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:52.248638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:52.248713 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:52.279382 1849924 cri.go:89] found id: ""
	I1124 09:54:52.279396 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.279404 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:52.279409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:52.279471 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:52.308695 1849924 cri.go:89] found id: ""
	I1124 09:54:52.308709 1849924 logs.go:282] 0 containers: []
	W1124 09:54:52.308717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:52.308724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:52.308735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:52.376027 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:52.376050 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:52.391327 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:52.391343 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:52.459367 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:52.451062   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.451780   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.453572   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.454231   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:52.455590   12534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:52.459377 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:52.459389 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:52.535870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:52.535893 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:55.066284 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:55.077139 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:55.077203 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:55.105400 1849924 cri.go:89] found id: ""
	I1124 09:54:55.105498 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.105506 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:55.105512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:55.105620 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:55.136637 1849924 cri.go:89] found id: ""
	I1124 09:54:55.136651 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.136659 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:55.136664 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:55.136729 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:55.164659 1849924 cri.go:89] found id: ""
	I1124 09:54:55.164673 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.164680 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:55.164685 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:55.164749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:55.190091 1849924 cri.go:89] found id: ""
	I1124 09:54:55.190117 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.190124 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:55.190129 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:55.190191 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:55.224336 1849924 cri.go:89] found id: ""
	I1124 09:54:55.224351 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.224358 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:55.224363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:55.224424 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:55.259735 1849924 cri.go:89] found id: ""
	I1124 09:54:55.259748 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.259755 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:55.259761 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:55.259821 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:55.290052 1849924 cri.go:89] found id: ""
	I1124 09:54:55.290065 1849924 logs.go:282] 0 containers: []
	W1124 09:54:55.290072 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:55.290079 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:55.290090 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:55.355938 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:55.355957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:55.371501 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:55.371518 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:55.437126 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:55.429218   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.429925   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431446   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.431899   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:55.433433   12640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:55.437140 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:55.437152 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:54:55.515834 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:55.515854 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.048421 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:54:58.059495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:54:58.059560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:54:58.087204 1849924 cri.go:89] found id: ""
	I1124 09:54:58.087219 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.087226 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:54:58.087232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:54:58.087292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:54:58.118248 1849924 cri.go:89] found id: ""
	I1124 09:54:58.118262 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.118270 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:54:58.118276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:54:58.118336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:54:58.144878 1849924 cri.go:89] found id: ""
	I1124 09:54:58.144892 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.144899 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:54:58.144905 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:54:58.144963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:54:58.171781 1849924 cri.go:89] found id: ""
	I1124 09:54:58.171795 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.171814 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:54:58.171820 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:54:58.171898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:54:58.200885 1849924 cri.go:89] found id: ""
	I1124 09:54:58.200907 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.200915 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:54:58.200920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:54:58.200993 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:54:58.231674 1849924 cri.go:89] found id: ""
	I1124 09:54:58.231688 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.231695 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:54:58.231718 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:54:58.231792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:54:58.266664 1849924 cri.go:89] found id: ""
	I1124 09:54:58.266679 1849924 logs.go:282] 0 containers: []
	W1124 09:54:58.266686 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:54:58.266694 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:54:58.266705 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:54:58.300806 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:54:58.300822 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:54:58.367929 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:54:58.367949 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:54:58.383950 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:54:58.383967 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:54:58.449243 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:54:58.441179   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.441862   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.443562   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.444043   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:54:58.445570   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:54:58.449254 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:54:58.449279 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:01.029569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:01.040150 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:01.040231 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:01.067942 1849924 cri.go:89] found id: ""
	I1124 09:55:01.067955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.067962 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:01.067968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:01.068031 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:01.095348 1849924 cri.go:89] found id: ""
	I1124 09:55:01.095362 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.095369 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:01.095375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:01.095436 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:01.125781 1849924 cri.go:89] found id: ""
	I1124 09:55:01.125795 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.125803 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:01.125808 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:01.125871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:01.153546 1849924 cri.go:89] found id: ""
	I1124 09:55:01.153561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.153568 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:01.153575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:01.153643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:01.183965 1849924 cri.go:89] found id: ""
	I1124 09:55:01.183980 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.183987 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:01.183993 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:01.184055 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:01.218518 1849924 cri.go:89] found id: ""
	I1124 09:55:01.218533 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.218541 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:01.218548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:01.218628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:01.255226 1849924 cri.go:89] found id: ""
	I1124 09:55:01.255241 1849924 logs.go:282] 0 containers: []
	W1124 09:55:01.255248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:01.255255 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:01.255266 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:01.290705 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:01.290723 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:01.362275 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:01.362296 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:01.378338 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:01.378357 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:01.447338 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:01.439114   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.439836   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.441485   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.442035   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:01.443658   12863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:01.447348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:01.447359 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.029431 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:04.039677 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:04.039753 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:04.064938 1849924 cri.go:89] found id: ""
	I1124 09:55:04.064952 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.064968 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:04.064975 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:04.065032 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:04.091065 1849924 cri.go:89] found id: ""
	I1124 09:55:04.091079 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.091087 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:04.091092 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:04.091155 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:04.119888 1849924 cri.go:89] found id: ""
	I1124 09:55:04.119902 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.119910 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:04.119915 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:04.119990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:04.145893 1849924 cri.go:89] found id: ""
	I1124 09:55:04.145907 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.145914 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:04.145920 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:04.145981 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:04.172668 1849924 cri.go:89] found id: ""
	I1124 09:55:04.172682 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.172689 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:04.172695 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:04.172770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:04.199546 1849924 cri.go:89] found id: ""
	I1124 09:55:04.199559 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.199576 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:04.199582 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:04.199654 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:04.233837 1849924 cri.go:89] found id: ""
	I1124 09:55:04.233850 1849924 logs.go:282] 0 containers: []
	W1124 09:55:04.233857 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:04.233865 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:04.233875 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:04.312846 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:04.312868 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:04.328376 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:04.328393 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:04.392893 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:04.385148   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.385738   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387356   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.387802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:04.389403   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:04.392903 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:04.392914 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:04.474469 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:04.474497 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.002775 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:07.014668 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:07.014734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:07.041533 1849924 cri.go:89] found id: ""
	I1124 09:55:07.041549 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.041556 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:07.041563 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:07.041628 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:07.071414 1849924 cri.go:89] found id: ""
	I1124 09:55:07.071429 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.071436 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:07.071442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:07.071500 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:07.102622 1849924 cri.go:89] found id: ""
	I1124 09:55:07.102637 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.102644 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:07.102650 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:07.102708 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:07.127684 1849924 cri.go:89] found id: ""
	I1124 09:55:07.127713 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.127720 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:07.127726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:07.127792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:07.153696 1849924 cri.go:89] found id: ""
	I1124 09:55:07.153710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.153718 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:07.153724 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:07.153785 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:07.186158 1849924 cri.go:89] found id: ""
	I1124 09:55:07.186180 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.186187 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:07.186193 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:07.186252 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:07.217520 1849924 cri.go:89] found id: ""
	I1124 09:55:07.217554 1849924 logs.go:282] 0 containers: []
	W1124 09:55:07.217562 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:07.217570 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:07.217580 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:07.247265 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:07.247288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:07.320517 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:07.320537 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:07.336358 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:07.336373 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:07.403281 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:07.394729   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.395524   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397084   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.397809   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:07.399514   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:07.403292 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:07.403302 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:09.981463 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:09.992128 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:09.992195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:10.021174 1849924 cri.go:89] found id: ""
	I1124 09:55:10.021189 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.021197 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:10.021203 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:10.021267 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:10.049180 1849924 cri.go:89] found id: ""
	I1124 09:55:10.049194 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.049202 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:10.049207 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:10.049270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:10.078645 1849924 cri.go:89] found id: ""
	I1124 09:55:10.078660 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.078667 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:10.078673 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:10.078734 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:10.106290 1849924 cri.go:89] found id: ""
	I1124 09:55:10.106304 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.106312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:10.106318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:10.106390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:10.133401 1849924 cri.go:89] found id: ""
	I1124 09:55:10.133455 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.133462 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:10.133468 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:10.133544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:10.162805 1849924 cri.go:89] found id: ""
	I1124 09:55:10.162820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.162827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:10.162833 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:10.162890 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:10.189156 1849924 cri.go:89] found id: ""
	I1124 09:55:10.189170 1849924 logs.go:282] 0 containers: []
	W1124 09:55:10.189177 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:10.189185 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:10.189206 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:10.280238 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:10.272369   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.272932   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.274613   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.275093   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:10.276666   13154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:10.280247 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:10.280258 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:10.359007 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:10.359031 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:10.395999 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:10.396024 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:10.462661 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:10.462683 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:12.979323 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:12.989228 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:12.989300 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:13.016908 1849924 cri.go:89] found id: ""
	I1124 09:55:13.016922 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.016929 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:13.016935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:13.016998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:13.044445 1849924 cri.go:89] found id: ""
	I1124 09:55:13.044467 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.044474 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:13.044480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:13.044547 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:13.070357 1849924 cri.go:89] found id: ""
	I1124 09:55:13.070379 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.070387 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:13.070392 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:13.070461 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:13.098253 1849924 cri.go:89] found id: ""
	I1124 09:55:13.098267 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.098274 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:13.098280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:13.098339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:13.124183 1849924 cri.go:89] found id: ""
	I1124 09:55:13.124196 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.124203 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:13.124209 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:13.124269 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:13.150521 1849924 cri.go:89] found id: ""
	I1124 09:55:13.150536 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.150543 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:13.150549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:13.150619 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:13.181696 1849924 cri.go:89] found id: ""
	I1124 09:55:13.181710 1849924 logs.go:282] 0 containers: []
	W1124 09:55:13.181717 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:13.181724 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:13.181735 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:13.250758 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:13.250778 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:13.271249 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:13.271264 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:13.332213 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:13.324102   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.324686   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326466   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.326912   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:13.328560   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:13.332223 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:13.332235 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:13.409269 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:13.409293 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:15.940893 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:15.951127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:15.951201 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:15.976744 1849924 cri.go:89] found id: ""
	I1124 09:55:15.976767 1849924 logs.go:282] 0 containers: []
	W1124 09:55:15.976774 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:15.976780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:15.976848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:16.005218 1849924 cri.go:89] found id: ""
	I1124 09:55:16.005235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.005245 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:16.005251 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:16.005336 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:16.036862 1849924 cri.go:89] found id: ""
	I1124 09:55:16.036888 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.036896 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:16.036902 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:16.036990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:16.063354 1849924 cri.go:89] found id: ""
	I1124 09:55:16.063369 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.063376 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:16.063382 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:16.063455 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:16.092197 1849924 cri.go:89] found id: ""
	I1124 09:55:16.092211 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.092218 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:16.092224 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:16.092286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:16.117617 1849924 cri.go:89] found id: ""
	I1124 09:55:16.117631 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.117639 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:16.117644 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:16.117702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:16.143200 1849924 cri.go:89] found id: ""
	I1124 09:55:16.143214 1849924 logs.go:282] 0 containers: []
	W1124 09:55:16.143220 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:16.143228 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:16.143239 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:16.171873 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:16.171889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:16.247500 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:16.247519 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:16.267064 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:16.267080 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:16.337347 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:16.328856   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.329515   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331196   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.331750   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:16.333605   13393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:16.337357 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:16.337368 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:18.916700 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:18.927603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:18.927697 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:18.958633 1849924 cri.go:89] found id: ""
	I1124 09:55:18.958649 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.958656 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:18.958662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:18.958725 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:18.988567 1849924 cri.go:89] found id: ""
	I1124 09:55:18.988582 1849924 logs.go:282] 0 containers: []
	W1124 09:55:18.988589 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:18.988594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:18.988665 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:19.016972 1849924 cri.go:89] found id: ""
	I1124 09:55:19.016986 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.016993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:19.016999 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:19.017058 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:19.042806 1849924 cri.go:89] found id: ""
	I1124 09:55:19.042827 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.042835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:19.042841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:19.042905 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:19.073274 1849924 cri.go:89] found id: ""
	I1124 09:55:19.073288 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.073296 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:19.073301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:19.073368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:19.099687 1849924 cri.go:89] found id: ""
	I1124 09:55:19.099701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.099708 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:19.099714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:19.099780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:19.126512 1849924 cri.go:89] found id: ""
	I1124 09:55:19.126526 1849924 logs.go:282] 0 containers: []
	W1124 09:55:19.126532 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:19.126540 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:19.126550 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:19.194410 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:19.194430 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:19.216505 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:19.216527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:19.291566 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:19.282006   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.282582   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.284640   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.285443   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:19.286785   13486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:19.291578 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:19.291591 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:19.371192 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:19.371213 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:21.902356 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:21.912405 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:21.912468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:21.937243 1849924 cri.go:89] found id: ""
	I1124 09:55:21.937256 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.937270 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:21.937276 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:21.937335 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:21.963054 1849924 cri.go:89] found id: ""
	I1124 09:55:21.963068 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.963075 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:21.963080 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:21.963136 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:21.988695 1849924 cri.go:89] found id: ""
	I1124 09:55:21.988708 1849924 logs.go:282] 0 containers: []
	W1124 09:55:21.988715 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:21.988722 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:21.988780 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:22.015029 1849924 cri.go:89] found id: ""
	I1124 09:55:22.015043 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.015050 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:22.015056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:22.015117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:22.044828 1849924 cri.go:89] found id: ""
	I1124 09:55:22.044843 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.044851 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:22.044857 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:22.044919 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:22.071875 1849924 cri.go:89] found id: ""
	I1124 09:55:22.071889 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.071897 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:22.071903 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:22.071970 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:22.099237 1849924 cri.go:89] found id: ""
	I1124 09:55:22.099252 1849924 logs.go:282] 0 containers: []
	W1124 09:55:22.099259 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:22.099267 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:22.099278 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:22.170156 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:22.170176 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:22.185271 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:22.185288 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:22.271963 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:22.260541   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.261399   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263167   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.263474   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:22.264951   13587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:22.271973 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:22.271984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:22.349426 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:22.349447 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:24.878185 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:24.888725 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:24.888800 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:24.915846 1849924 cri.go:89] found id: ""
	I1124 09:55:24.915860 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.915867 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:24.915872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:24.915931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:24.944104 1849924 cri.go:89] found id: ""
	I1124 09:55:24.944118 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.944125 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:24.944131 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:24.944196 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:24.970424 1849924 cri.go:89] found id: ""
	I1124 09:55:24.970438 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.970445 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:24.970450 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:24.970511 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:24.999941 1849924 cri.go:89] found id: ""
	I1124 09:55:24.999955 1849924 logs.go:282] 0 containers: []
	W1124 09:55:24.999962 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:24.999968 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:25.000027 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:25.030682 1849924 cri.go:89] found id: ""
	I1124 09:55:25.030700 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.030707 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:25.030714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:25.030788 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:25.061169 1849924 cri.go:89] found id: ""
	I1124 09:55:25.061183 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.061191 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:25.061196 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:25.061262 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:25.092046 1849924 cri.go:89] found id: ""
	I1124 09:55:25.092061 1849924 logs.go:282] 0 containers: []
	W1124 09:55:25.092069 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:25.092078 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:25.092089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:25.164204 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:25.164229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:25.180461 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:25.180477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:25.270104 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:25.258264   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.259071   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.260722   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.261322   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:25.262899   13694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:25.270114 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:25.270125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:25.349962 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:25.349985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:27.885869 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:27.895923 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:27.895990 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:27.923576 1849924 cri.go:89] found id: ""
	I1124 09:55:27.923591 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.923598 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:27.923604 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:27.923660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:27.949384 1849924 cri.go:89] found id: ""
	I1124 09:55:27.949398 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.949405 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:27.949409 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:27.949468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:27.974662 1849924 cri.go:89] found id: ""
	I1124 09:55:27.974675 1849924 logs.go:282] 0 containers: []
	W1124 09:55:27.974682 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:27.974687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:27.974752 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:28.000014 1849924 cri.go:89] found id: ""
	I1124 09:55:28.000028 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.000035 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:28.000041 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:28.000113 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:28.031383 1849924 cri.go:89] found id: ""
	I1124 09:55:28.031397 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.031404 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:28.031410 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:28.031468 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:28.062851 1849924 cri.go:89] found id: ""
	I1124 09:55:28.062872 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.062880 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:28.062886 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:28.062965 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:28.091592 1849924 cri.go:89] found id: ""
	I1124 09:55:28.091608 1849924 logs.go:282] 0 containers: []
	W1124 09:55:28.091623 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:28.091633 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:28.091646 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:28.125018 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:28.125035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:28.190729 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:28.190751 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:28.205665 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:28.205681 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:28.285905 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:28.277793   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.278488   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280142   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.280727   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:28.282341   13810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:28.285917 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:28.285927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:30.864245 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:30.876164 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:30.876248 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:30.901572 1849924 cri.go:89] found id: ""
	I1124 09:55:30.901586 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.901593 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:30.901599 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:30.901659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:30.931361 1849924 cri.go:89] found id: ""
	I1124 09:55:30.931374 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.931382 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:30.931388 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:30.931449 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:30.956087 1849924 cri.go:89] found id: ""
	I1124 09:55:30.956101 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.956108 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:30.956114 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:30.956174 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:30.981912 1849924 cri.go:89] found id: ""
	I1124 09:55:30.981925 1849924 logs.go:282] 0 containers: []
	W1124 09:55:30.981933 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:30.981938 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:30.982013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:31.010764 1849924 cri.go:89] found id: ""
	I1124 09:55:31.010778 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.010804 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:31.010811 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:31.010884 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:31.037094 1849924 cri.go:89] found id: ""
	I1124 09:55:31.037140 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.037146 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:31.037153 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:31.037221 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:31.064060 1849924 cri.go:89] found id: ""
	I1124 09:55:31.064075 1849924 logs.go:282] 0 containers: []
	W1124 09:55:31.064092 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:31.064100 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:31.064111 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:31.129432 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:31.120323   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.121052   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.122830   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.123442   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:31.125195   13894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:31.129444 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:31.129455 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:31.207603 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:31.207622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:31.246019 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:31.246035 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:31.313859 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:31.313882 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:33.829785 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:33.839749 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:33.839813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:33.864810 1849924 cri.go:89] found id: ""
	I1124 09:55:33.864824 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.864831 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:33.864837 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:33.864898 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:33.890309 1849924 cri.go:89] found id: ""
	I1124 09:55:33.890324 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.890331 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:33.890336 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:33.890401 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:33.922386 1849924 cri.go:89] found id: ""
	I1124 09:55:33.922399 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.922406 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:33.922412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:33.922473 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:33.947199 1849924 cri.go:89] found id: ""
	I1124 09:55:33.947213 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.947220 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:33.947226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:33.947289 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:33.972195 1849924 cri.go:89] found id: ""
	I1124 09:55:33.972209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.972216 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:33.972222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:33.972294 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:33.997877 1849924 cri.go:89] found id: ""
	I1124 09:55:33.997891 1849924 logs.go:282] 0 containers: []
	W1124 09:55:33.997898 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:33.997904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:33.997961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:34.024719 1849924 cri.go:89] found id: ""
	I1124 09:55:34.024733 1849924 logs.go:282] 0 containers: []
	W1124 09:55:34.024741 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:34.024748 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:34.024769 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:34.089874 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:34.089896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:34.104839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:34.104857 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:34.171681 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:34.163530   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.164246   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.165933   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.166487   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:34.168012   14004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:34.171691 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:34.171702 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:34.249876 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:34.249896 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:36.781512 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:36.791518 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:36.791579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:36.820485 1849924 cri.go:89] found id: ""
	I1124 09:55:36.820500 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.820508 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:36.820514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:36.820589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:36.845963 1849924 cri.go:89] found id: ""
	I1124 09:55:36.845978 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.845985 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:36.845991 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:36.846062 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:36.880558 1849924 cri.go:89] found id: ""
	I1124 09:55:36.880573 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.880580 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:36.880586 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:36.880656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:36.908730 1849924 cri.go:89] found id: ""
	I1124 09:55:36.908745 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.908752 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:36.908769 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:36.908830 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:36.936618 1849924 cri.go:89] found id: ""
	I1124 09:55:36.936634 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.936646 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:36.936662 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:36.936724 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:36.961091 1849924 cri.go:89] found id: ""
	I1124 09:55:36.961134 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.961142 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:36.961148 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:36.961215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:36.986263 1849924 cri.go:89] found id: ""
	I1124 09:55:36.986278 1849924 logs.go:282] 0 containers: []
	W1124 09:55:36.986285 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:36.986293 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:36.986304 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:37.061090 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:37.061120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:37.076634 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:37.076652 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:37.144407 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:37.135665   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.136346   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138043   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.138472   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:37.140069   14108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:37.144417 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:37.144427 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:37.223887 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:37.223907 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:39.759307 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:39.769265 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:39.769325 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:39.795092 1849924 cri.go:89] found id: ""
	I1124 09:55:39.795107 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.795114 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:39.795120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:39.795180 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:39.821381 1849924 cri.go:89] found id: ""
	I1124 09:55:39.821396 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.821403 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:39.821408 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:39.821480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:39.850195 1849924 cri.go:89] found id: ""
	I1124 09:55:39.850209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.850224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:39.850232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:39.850291 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:39.875376 1849924 cri.go:89] found id: ""
	I1124 09:55:39.875391 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.875398 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:39.875404 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:39.875466 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:39.904124 1849924 cri.go:89] found id: ""
	I1124 09:55:39.904138 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.904146 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:39.904151 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:39.904222 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:39.930807 1849924 cri.go:89] found id: ""
	I1124 09:55:39.930820 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.930827 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:39.930832 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:39.930889 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:39.960435 1849924 cri.go:89] found id: ""
	I1124 09:55:39.960449 1849924 logs.go:282] 0 containers: []
	W1124 09:55:39.960456 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:39.960464 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:39.960475 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:40.030261 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:40.021301   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.021882   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.023683   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.024501   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:40.026444   14206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:40.030271 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:40.030283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:40.109590 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:40.109615 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:40.143688 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:40.143704 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:40.212394 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:40.212412 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:42.734304 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:42.744432 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:42.744494 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:42.769686 1849924 cri.go:89] found id: ""
	I1124 09:55:42.769701 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.769708 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:42.769714 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:42.769774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:42.794368 1849924 cri.go:89] found id: ""
	I1124 09:55:42.794381 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.794388 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:42.794394 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:42.794460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:42.819036 1849924 cri.go:89] found id: ""
	I1124 09:55:42.819051 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.819058 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:42.819067 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:42.819126 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:42.845429 1849924 cri.go:89] found id: ""
	I1124 09:55:42.845444 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.845452 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:42.845457 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:42.845516 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:42.873391 1849924 cri.go:89] found id: ""
	I1124 09:55:42.873405 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.873412 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:42.873418 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:42.873483 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:42.899532 1849924 cri.go:89] found id: ""
	I1124 09:55:42.899560 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.899567 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:42.899575 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:42.899642 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:42.925159 1849924 cri.go:89] found id: ""
	I1124 09:55:42.925173 1849924 logs.go:282] 0 containers: []
	W1124 09:55:42.925180 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:42.925188 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:42.925215 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:43.003079 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:43.003104 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:43.041964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:43.041990 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:43.120202 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:43.120224 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:43.143097 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:43.143191 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:43.219616 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:43.210956   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.211349   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.213087   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.214022   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:43.215815   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:45.719895 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:45.730306 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:45.730370 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:45.755318 1849924 cri.go:89] found id: ""
	I1124 09:55:45.755333 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.755341 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:45.755353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:45.755413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:45.781283 1849924 cri.go:89] found id: ""
	I1124 09:55:45.781299 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.781305 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:45.781311 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:45.781369 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:45.807468 1849924 cri.go:89] found id: ""
	I1124 09:55:45.807482 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.807489 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:45.807495 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:45.807554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:45.836726 1849924 cri.go:89] found id: ""
	I1124 09:55:45.836741 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.836749 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:45.836754 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:45.836813 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:45.862613 1849924 cri.go:89] found id: ""
	I1124 09:55:45.862628 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.862635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:45.862641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:45.862702 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:45.894972 1849924 cri.go:89] found id: ""
	I1124 09:55:45.894987 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.894994 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:45.895000 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:45.895067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:45.922194 1849924 cri.go:89] found id: ""
	I1124 09:55:45.922209 1849924 logs.go:282] 0 containers: []
	W1124 09:55:45.922217 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:45.922224 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:45.922237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:45.954912 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:45.954930 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:46.021984 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:46.022004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:46.037849 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:46.037865 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:46.101460 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:46.094220   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.094591   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096148   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.096453   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:46.097881   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:46.101473 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:46.101483 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:48.688081 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:48.698194 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:48.698260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:48.724390 1849924 cri.go:89] found id: ""
	I1124 09:55:48.724404 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.724411 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:48.724416 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:48.724480 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:48.749323 1849924 cri.go:89] found id: ""
	I1124 09:55:48.749337 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.749344 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:48.749350 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:48.749406 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:48.774542 1849924 cri.go:89] found id: ""
	I1124 09:55:48.774555 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.774562 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:48.774569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:48.774635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:48.799553 1849924 cri.go:89] found id: ""
	I1124 09:55:48.799568 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.799575 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:48.799580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:48.799637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:48.824768 1849924 cri.go:89] found id: ""
	I1124 09:55:48.824782 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.824789 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:48.824794 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:48.824849 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:48.853654 1849924 cri.go:89] found id: ""
	I1124 09:55:48.853668 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.853674 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:48.853680 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:48.853738 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:48.880137 1849924 cri.go:89] found id: ""
	I1124 09:55:48.880151 1849924 logs.go:282] 0 containers: []
	W1124 09:55:48.880158 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:48.880166 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:48.880178 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:48.943985 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:48.935560   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.936223   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.937954   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.938523   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:48.940303   14519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:48.943998 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:48.944008 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:49.021387 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:49.021407 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:49.054551 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:49.054566 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:49.124670 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:49.124690 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.640001 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:51.650264 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:51.650326 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:51.675421 1849924 cri.go:89] found id: ""
	I1124 09:55:51.675434 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.675442 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:51.675447 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:51.675510 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:51.703552 1849924 cri.go:89] found id: ""
	I1124 09:55:51.703566 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.703573 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:51.703578 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:51.703637 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:51.731457 1849924 cri.go:89] found id: ""
	I1124 09:55:51.731470 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.731477 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:51.731483 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:51.731540 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:51.757515 1849924 cri.go:89] found id: ""
	I1124 09:55:51.757529 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.757536 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:51.757541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:51.757604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:51.787493 1849924 cri.go:89] found id: ""
	I1124 09:55:51.787507 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.787514 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:51.787520 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:51.787579 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:51.813153 1849924 cri.go:89] found id: ""
	I1124 09:55:51.813166 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.813173 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:51.813179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:51.813250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:51.845222 1849924 cri.go:89] found id: ""
	I1124 09:55:51.845235 1849924 logs.go:282] 0 containers: []
	W1124 09:55:51.845244 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:51.845252 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:51.845272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:51.860214 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:51.860236 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:51.924176 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:51.916718   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.917256   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.918768   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.919157   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:51.920614   14628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:51.924186 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:51.924196 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:52.001608 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:52.001629 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:52.037448 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:52.037466 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.609480 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:54.620161 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:54.620223 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:54.649789 1849924 cri.go:89] found id: ""
	I1124 09:55:54.649803 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.649810 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:54.649816 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:54.649879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:54.677548 1849924 cri.go:89] found id: ""
	I1124 09:55:54.677561 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.677568 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:54.677573 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:54.677635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:54.707602 1849924 cri.go:89] found id: ""
	I1124 09:55:54.707616 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.707623 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:54.707628 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:54.707687 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:54.737369 1849924 cri.go:89] found id: ""
	I1124 09:55:54.737382 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.737390 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:54.737396 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:54.737460 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:54.764514 1849924 cri.go:89] found id: ""
	I1124 09:55:54.764528 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.764536 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:54.764541 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:54.764599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:54.789898 1849924 cri.go:89] found id: ""
	I1124 09:55:54.789912 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.789920 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:54.789925 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:54.789986 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:54.815652 1849924 cri.go:89] found id: ""
	I1124 09:55:54.815665 1849924 logs.go:282] 0 containers: []
	W1124 09:55:54.815672 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:54.815681 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:54.815691 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:55:54.882879 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:54.882901 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:54.898593 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:54.898622 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:54.967134 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:54.958943   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.959692   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961447   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.961795   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:54.963010   14738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:54.967146 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:54.967157 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:55.046870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:55.046891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.578091 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:55:57.588580 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:55:57.588643 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:55:57.617411 1849924 cri.go:89] found id: ""
	I1124 09:55:57.617425 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.617432 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:55:57.617437 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:55:57.617503 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:55:57.642763 1849924 cri.go:89] found id: ""
	I1124 09:55:57.642777 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.642784 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:55:57.642789 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:55:57.642848 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:55:57.668484 1849924 cri.go:89] found id: ""
	I1124 09:55:57.668499 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.668506 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:55:57.668512 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:55:57.668571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:55:57.694643 1849924 cri.go:89] found id: ""
	I1124 09:55:57.694657 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.694664 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:55:57.694670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:55:57.694730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:55:57.720049 1849924 cri.go:89] found id: ""
	I1124 09:55:57.720063 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.720070 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:55:57.720075 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:55:57.720140 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:55:57.748016 1849924 cri.go:89] found id: ""
	I1124 09:55:57.748029 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.748036 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:55:57.748044 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:55:57.748104 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:55:57.774253 1849924 cri.go:89] found id: ""
	I1124 09:55:57.774266 1849924 logs.go:282] 0 containers: []
	W1124 09:55:57.774273 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:55:57.774281 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:55:57.774295 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:55:57.789236 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:55:57.789253 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:55:57.851207 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:55:57.843034   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.843762   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.845507   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.846064   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:55:57.847600   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:55:57.851217 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:55:57.851229 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:55:57.927927 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:55:57.927946 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:55:57.959058 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:55:57.959075 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.529440 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:00.539970 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:00.540034 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:00.566556 1849924 cri.go:89] found id: ""
	I1124 09:56:00.566570 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.566583 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:00.566589 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:00.566659 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:00.596278 1849924 cri.go:89] found id: ""
	I1124 09:56:00.596291 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.596298 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:00.596304 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:00.596362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:00.623580 1849924 cri.go:89] found id: ""
	I1124 09:56:00.623593 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.623600 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:00.623605 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:00.623664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:00.648991 1849924 cri.go:89] found id: ""
	I1124 09:56:00.649006 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.649012 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:00.649018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:00.649078 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:00.676614 1849924 cri.go:89] found id: ""
	I1124 09:56:00.676628 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.676635 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:00.676641 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:00.676706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:00.701480 1849924 cri.go:89] found id: ""
	I1124 09:56:00.701502 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.701509 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:00.701516 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:00.701575 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:00.727550 1849924 cri.go:89] found id: ""
	I1124 09:56:00.727563 1849924 logs.go:282] 0 containers: []
	W1124 09:56:00.727570 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:00.727578 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:00.727589 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:00.755964 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:00.755980 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:00.822018 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:00.822039 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:00.837252 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:00.837268 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:00.901931 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:00.892319   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.893334   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.894177   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.895936   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:00.896356   14956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:00.901942 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:00.901957 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.481859 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:03.493893 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:03.493961 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:03.522628 1849924 cri.go:89] found id: ""
	I1124 09:56:03.522643 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.522650 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:03.522656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:03.522716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:03.551454 1849924 cri.go:89] found id: ""
	I1124 09:56:03.551468 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.551475 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:03.551480 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:03.551539 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:03.580931 1849924 cri.go:89] found id: ""
	I1124 09:56:03.580945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.580951 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:03.580957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:03.581015 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:03.607826 1849924 cri.go:89] found id: ""
	I1124 09:56:03.607840 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.607846 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:03.607852 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:03.607923 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:03.637843 1849924 cri.go:89] found id: ""
	I1124 09:56:03.637857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.637865 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:03.637870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:03.637931 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:03.665156 1849924 cri.go:89] found id: ""
	I1124 09:56:03.665170 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.665176 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:03.665182 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:03.665250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:03.690810 1849924 cri.go:89] found id: ""
	I1124 09:56:03.690824 1849924 logs.go:282] 0 containers: []
	W1124 09:56:03.690831 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:03.690839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:03.690849 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:03.755803 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:03.746112   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.746816   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.748522   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.749036   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:03.752194   15039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:03.755813 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:03.755823 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:03.832793 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:03.832816 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:03.860351 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:03.860367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:03.930446 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:03.930465 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.445925 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:06.457385 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:06.457451 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:06.490931 1849924 cri.go:89] found id: ""
	I1124 09:56:06.490944 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.490951 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:06.490956 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:06.491013 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:06.529326 1849924 cri.go:89] found id: ""
	I1124 09:56:06.529340 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.529347 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:06.529353 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:06.529409 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:06.554888 1849924 cri.go:89] found id: ""
	I1124 09:56:06.554914 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.554921 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:06.554926 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:06.554984 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:06.579750 1849924 cri.go:89] found id: ""
	I1124 09:56:06.579764 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.579771 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:06.579781 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:06.579839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:06.605075 1849924 cri.go:89] found id: ""
	I1124 09:56:06.605098 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.605134 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:06.605140 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:06.605207 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:06.630281 1849924 cri.go:89] found id: ""
	I1124 09:56:06.630295 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.630302 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:06.630307 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:06.630366 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:06.655406 1849924 cri.go:89] found id: ""
	I1124 09:56:06.655427 1849924 logs.go:282] 0 containers: []
	W1124 09:56:06.655435 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:06.655442 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:06.655453 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:06.722316 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:06.722335 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:06.737174 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:06.737190 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:06.801018 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:06.793198   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.793849   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795373   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.795661   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:06.797232   15148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:06.801032 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:06.801042 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:06.882225 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:06.882254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.412996 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:09.423266 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:09.423332 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:09.452270 1849924 cri.go:89] found id: ""
	I1124 09:56:09.452283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.452290 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:09.452295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:09.452353 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:09.484931 1849924 cri.go:89] found id: ""
	I1124 09:56:09.484945 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.484952 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:09.484957 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:09.485030 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:09.526676 1849924 cri.go:89] found id: ""
	I1124 09:56:09.526689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.526696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:09.526701 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:09.526758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:09.551815 1849924 cri.go:89] found id: ""
	I1124 09:56:09.551828 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.551835 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:09.551841 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:09.551904 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:09.580143 1849924 cri.go:89] found id: ""
	I1124 09:56:09.580159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.580167 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:09.580173 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:09.580233 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:09.608255 1849924 cri.go:89] found id: ""
	I1124 09:56:09.608269 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.608276 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:09.608281 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:09.608338 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:09.638262 1849924 cri.go:89] found id: ""
	I1124 09:56:09.638276 1849924 logs.go:282] 0 containers: []
	W1124 09:56:09.638283 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:09.638291 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:09.638301 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:09.713707 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:09.713728 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:09.741202 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:09.741218 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:09.806578 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:09.806598 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:09.821839 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:09.821855 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:09.888815 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:09.880422   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.881210   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.882830   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.883425   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:09.885056   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.390494 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:12.400491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:12.400550 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:12.426496 1849924 cri.go:89] found id: ""
	I1124 09:56:12.426511 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.426517 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:12.426524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:12.426587 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:12.457770 1849924 cri.go:89] found id: ""
	I1124 09:56:12.457794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.457801 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:12.457807 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:12.457873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:12.489154 1849924 cri.go:89] found id: ""
	I1124 09:56:12.489167 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.489174 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:12.489179 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:12.489250 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:12.524997 1849924 cri.go:89] found id: ""
	I1124 09:56:12.525010 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.525018 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:12.525024 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:12.525090 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:12.550538 1849924 cri.go:89] found id: ""
	I1124 09:56:12.550561 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.550569 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:12.550574 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:12.550650 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:12.575990 1849924 cri.go:89] found id: ""
	I1124 09:56:12.576011 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.576018 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:12.576025 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:12.576095 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:12.602083 1849924 cri.go:89] found id: ""
	I1124 09:56:12.602097 1849924 logs.go:282] 0 containers: []
	W1124 09:56:12.602104 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:12.602112 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:12.602125 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:12.667794 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:12.667814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:12.682815 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:12.682832 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:12.749256 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:12.741287   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.741908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.743573   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.744128   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:12.745755   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:12.749266 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:12.749276 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:12.823882 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:12.823902 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.353890 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:15.364319 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:15.364380 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:15.389759 1849924 cri.go:89] found id: ""
	I1124 09:56:15.389772 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.389786 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:15.389792 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:15.389850 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:15.414921 1849924 cri.go:89] found id: ""
	I1124 09:56:15.414936 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.414943 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:15.414948 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:15.415008 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:15.444228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.444242 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.444249 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:15.444254 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:15.444314 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:15.476734 1849924 cri.go:89] found id: ""
	I1124 09:56:15.476747 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.476763 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:15.476768 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:15.476836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:15.507241 1849924 cri.go:89] found id: ""
	I1124 09:56:15.507254 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.507261 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:15.507275 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:15.507339 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:15.544058 1849924 cri.go:89] found id: ""
	I1124 09:56:15.544081 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.544089 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:15.544094 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:15.544162 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:15.571228 1849924 cri.go:89] found id: ""
	I1124 09:56:15.571241 1849924 logs.go:282] 0 containers: []
	W1124 09:56:15.571248 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:15.571261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:15.571272 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:15.646647 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:15.646667 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:15.674311 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:15.674326 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:15.739431 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:15.739451 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:15.754640 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:15.754662 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:15.821471 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:15.813499   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.814169   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.815722   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.816349   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:15.817902   15476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.321745 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:18.331603 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:18.331664 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:18.357195 1849924 cri.go:89] found id: ""
	I1124 09:56:18.357215 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.357223 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:18.357229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:18.357292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:18.387513 1849924 cri.go:89] found id: ""
	I1124 09:56:18.387527 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.387534 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:18.387540 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:18.387600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:18.414561 1849924 cri.go:89] found id: ""
	I1124 09:56:18.414583 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.414590 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:18.414596 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:18.414670 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:18.441543 1849924 cri.go:89] found id: ""
	I1124 09:56:18.441557 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.441564 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:18.441569 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:18.441627 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:18.481911 1849924 cri.go:89] found id: ""
	I1124 09:56:18.481924 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.481931 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:18.481937 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:18.481995 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:18.512577 1849924 cri.go:89] found id: ""
	I1124 09:56:18.512589 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.512596 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:18.512601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:18.512660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:18.542006 1849924 cri.go:89] found id: ""
	I1124 09:56:18.542021 1849924 logs.go:282] 0 containers: []
	W1124 09:56:18.542028 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:18.542035 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:18.542045 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:18.572217 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:18.572233 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:18.637845 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:18.637863 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:18.653892 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:18.653908 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:18.720870 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:18.711123   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.711807   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715048   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.715612   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:18.717360   15575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:18.720881 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:18.720891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.300479 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:21.310612 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:21.310716 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:21.339787 1849924 cri.go:89] found id: ""
	I1124 09:56:21.339801 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.339808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:21.339819 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:21.339879 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:21.364577 1849924 cri.go:89] found id: ""
	I1124 09:56:21.364601 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.364609 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:21.364615 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:21.364688 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:21.391798 1849924 cri.go:89] found id: ""
	I1124 09:56:21.391852 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.391859 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:21.391865 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:21.391939 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:21.417518 1849924 cri.go:89] found id: ""
	I1124 09:56:21.417532 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.417539 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:21.417545 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:21.417600 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:21.443079 1849924 cri.go:89] found id: ""
	I1124 09:56:21.443092 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.443099 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:21.443104 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:21.443164 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:21.483649 1849924 cri.go:89] found id: ""
	I1124 09:56:21.483663 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.483685 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:21.483691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:21.483758 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:21.513352 1849924 cri.go:89] found id: ""
	I1124 09:56:21.513367 1849924 logs.go:282] 0 containers: []
	W1124 09:56:21.513374 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:21.513383 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:21.513445 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:21.583074 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:21.583095 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:21.598415 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:21.598432 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:21.661326 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:21.653065   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.653679   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.655459   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.656094   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:21.657796   15667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:21.661336 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:21.661348 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:21.742506 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:21.742527 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:24.271763 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:24.281983 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:24.282044 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:24.313907 1849924 cri.go:89] found id: ""
	I1124 09:56:24.313920 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.313928 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:24.313934 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:24.314006 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:24.338982 1849924 cri.go:89] found id: ""
	I1124 09:56:24.338996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.339003 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:24.339009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:24.339067 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:24.365195 1849924 cri.go:89] found id: ""
	I1124 09:56:24.365209 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.365216 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:24.365222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:24.365292 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:24.390215 1849924 cri.go:89] found id: ""
	I1124 09:56:24.390228 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.390235 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:24.390241 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:24.390299 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:24.415458 1849924 cri.go:89] found id: ""
	I1124 09:56:24.415472 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.415479 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:24.415484 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:24.415544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:24.442483 1849924 cri.go:89] found id: ""
	I1124 09:56:24.442497 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.442504 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:24.442510 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:24.442571 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:24.478898 1849924 cri.go:89] found id: ""
	I1124 09:56:24.478912 1849924 logs.go:282] 0 containers: []
	W1124 09:56:24.478919 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:24.478926 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:24.478936 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:24.559295 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:24.559320 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:24.575521 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:24.575538 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:24.643962 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:24.634324   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.635404   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637173   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.637623   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:24.639255   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:24.643974 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:24.643985 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:24.721863 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:24.721883 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.252684 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:27.262544 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:27.262604 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:27.288190 1849924 cri.go:89] found id: ""
	I1124 09:56:27.288203 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.288211 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:27.288216 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:27.288276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:27.315955 1849924 cri.go:89] found id: ""
	I1124 09:56:27.315975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.315983 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:27.315988 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:27.316050 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:27.341613 1849924 cri.go:89] found id: ""
	I1124 09:56:27.341626 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.341633 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:27.341639 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:27.341699 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:27.366677 1849924 cri.go:89] found id: ""
	I1124 09:56:27.366690 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.366697 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:27.366703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:27.366768 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:27.392001 1849924 cri.go:89] found id: ""
	I1124 09:56:27.392015 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.392021 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:27.392027 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:27.392085 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:27.419410 1849924 cri.go:89] found id: ""
	I1124 09:56:27.419430 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.419436 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:27.419442 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:27.419501 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:27.444780 1849924 cri.go:89] found id: ""
	I1124 09:56:27.444794 1849924 logs.go:282] 0 containers: []
	W1124 09:56:27.444801 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:27.444809 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:27.444824 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:27.478836 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:27.478853 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:27.552795 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:27.552814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:27.567935 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:27.567988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:27.630838 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:27.623155   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.623775   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625325   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.625806   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:27.627233   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:27.630849 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:27.630859 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:30.212620 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:30.223248 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:30.223313 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:30.249863 1849924 cri.go:89] found id: ""
	I1124 09:56:30.249876 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.249883 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:30.249888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:30.249947 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:30.275941 1849924 cri.go:89] found id: ""
	I1124 09:56:30.275955 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.275974 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:30.275980 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:30.276053 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:30.300914 1849924 cri.go:89] found id: ""
	I1124 09:56:30.300928 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.300944 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:30.300950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:30.301016 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:30.325980 1849924 cri.go:89] found id: ""
	I1124 09:56:30.325994 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.326011 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:30.326018 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:30.326089 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:30.352023 1849924 cri.go:89] found id: ""
	I1124 09:56:30.352038 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.352045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:30.352050 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:30.352121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:30.379711 1849924 cri.go:89] found id: ""
	I1124 09:56:30.379724 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.379731 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:30.379736 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:30.379801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:30.409210 1849924 cri.go:89] found id: ""
	I1124 09:56:30.409224 1849924 logs.go:282] 0 containers: []
	W1124 09:56:30.409232 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:30.409240 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:30.409251 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:30.437995 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:30.438012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:30.507429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:30.507448 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:30.525911 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:30.525927 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:30.589196 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:30.581582   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.582300   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.583474   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.584003   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:30.585632   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:30.589210 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:30.589220 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:33.172621 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:33.182671 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:33.182730 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:33.211695 1849924 cri.go:89] found id: ""
	I1124 09:56:33.211709 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.211716 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:33.211721 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:33.211779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:33.237798 1849924 cri.go:89] found id: ""
	I1124 09:56:33.237811 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.237818 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:33.237824 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:33.237885 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:33.262147 1849924 cri.go:89] found id: ""
	I1124 09:56:33.262160 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.262167 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:33.262172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:33.262230 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:33.286667 1849924 cri.go:89] found id: ""
	I1124 09:56:33.286681 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.286690 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:33.286696 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:33.286754 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:33.311109 1849924 cri.go:89] found id: ""
	I1124 09:56:33.311122 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.311129 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:33.311135 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:33.311198 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:33.336757 1849924 cri.go:89] found id: ""
	I1124 09:56:33.336781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.336790 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:33.336796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:33.336864 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:33.365159 1849924 cri.go:89] found id: ""
	I1124 09:56:33.365172 1849924 logs.go:282] 0 containers: []
	W1124 09:56:33.365179 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:33.365186 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:33.365197 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:33.393002 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:33.393017 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:33.457704 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:33.457724 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:33.473674 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:33.473700 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:33.547251 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:33.539312   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.540185   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.541750   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.542086   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:33.543554   16101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:33.547261 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:33.547274 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.125180 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:36.135549 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:36.135611 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:36.161892 1849924 cri.go:89] found id: ""
	I1124 09:56:36.161906 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.161913 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:36.161919 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:36.161980 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:36.192254 1849924 cri.go:89] found id: ""
	I1124 09:56:36.192268 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.192275 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:36.192280 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:36.192341 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:36.219675 1849924 cri.go:89] found id: ""
	I1124 09:56:36.219689 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.219696 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:36.219702 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:36.219760 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:36.249674 1849924 cri.go:89] found id: ""
	I1124 09:56:36.249688 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.249695 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:36.249700 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:36.249756 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:36.276115 1849924 cri.go:89] found id: ""
	I1124 09:56:36.276129 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.276136 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:36.276141 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:36.276199 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:36.303472 1849924 cri.go:89] found id: ""
	I1124 09:56:36.303486 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.303494 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:36.303499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:36.303558 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:36.332774 1849924 cri.go:89] found id: ""
	I1124 09:56:36.332789 1849924 logs.go:282] 0 containers: []
	W1124 09:56:36.332796 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:36.332804 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:36.332814 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:36.410262 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:36.410282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:36.442608 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:36.442625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:36.517228 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:36.517247 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:36.532442 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:36.532459 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:36.598941 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:36.591038   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.591731   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593289   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.593891   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:36.595477   16210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
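The cycles above all follow the same pattern: minikube probes the CRI runtime for each expected control-plane container (`sudo crictl ps -a --quiet --name=<component>`), finds none (`found id: ""`), logs a warning per component, and then falls back to gathering kubelet/CRI-O journals and `kubectl describe nodes` (which fails because the apiserver on localhost:8441 is down). A minimal sketch of that probe loop, written here purely as an illustration of the log's behavior (the `probe` helper is hypothetical, not minikube source; a real run would pass `crictl` output as the second argument):

```shell
#!/bin/sh
# Hypothetical sketch of the probe loop visible in the log above.
# probe COMPONENT IDS: warn when the CRI runtime returned no container IDs.
probe() {
  if [ -z "$2" ]; then
    # Mirrors the W-level line: No container was found matching "<name>"
    echo "No container was found matching \"$1\""
  fi
}

# The components checked in each cycle of the log, in the same order.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet; do
  # In this failed run every query came back empty ('found id: ""'),
  # so we pass an empty ID list to show the warning path.
  probe "$name" ""
done
```

On a healthy node the second argument would be non-empty for at least kube-apiserver and etcd, and the fallback log gathering (journalctl, `crictl ps -a`, `kubectl describe nodes`) would not be the only diagnostic output.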
	I1124 09:56:39.099623 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:39.110286 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:39.110347 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:39.135094 1849924 cri.go:89] found id: ""
	I1124 09:56:39.135108 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.135115 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:39.135120 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:39.135184 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:39.161664 1849924 cri.go:89] found id: ""
	I1124 09:56:39.161678 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.161685 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:39.161691 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:39.161749 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:39.186843 1849924 cri.go:89] found id: ""
	I1124 09:56:39.186857 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.186865 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:39.186870 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:39.186930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:39.212864 1849924 cri.go:89] found id: ""
	I1124 09:56:39.212878 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.212889 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:39.212895 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:39.212953 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:39.243329 1849924 cri.go:89] found id: ""
	I1124 09:56:39.243343 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.243350 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:39.243356 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:39.243421 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:39.268862 1849924 cri.go:89] found id: ""
	I1124 09:56:39.268875 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.268883 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:39.268888 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:39.268950 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:39.295966 1849924 cri.go:89] found id: ""
	I1124 09:56:39.295979 1849924 logs.go:282] 0 containers: []
	W1124 09:56:39.295986 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:39.295993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:39.296004 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:39.327310 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:39.327325 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:39.392831 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:39.392850 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:39.407904 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:39.407920 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:39.476692 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:39.468022   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.468696   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470234   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.470747   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:39.472594   16306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:39.476716 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:39.476729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.055953 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:42.067687 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:42.067767 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:42.096948 1849924 cri.go:89] found id: ""
	I1124 09:56:42.096963 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.096971 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:42.096977 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:42.097039 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:42.128766 1849924 cri.go:89] found id: ""
	I1124 09:56:42.128781 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.128789 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:42.128795 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:42.128861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:42.160266 1849924 cri.go:89] found id: ""
	I1124 09:56:42.160283 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.160291 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:42.160297 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:42.160368 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:42.191973 1849924 cri.go:89] found id: ""
	I1124 09:56:42.191996 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.192004 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:42.192011 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:42.192081 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:42.226204 1849924 cri.go:89] found id: ""
	I1124 09:56:42.226218 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.226226 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:42.226232 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:42.226316 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:42.253907 1849924 cri.go:89] found id: ""
	I1124 09:56:42.253922 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.253929 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:42.253935 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:42.253998 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:42.282770 1849924 cri.go:89] found id: ""
	I1124 09:56:42.282786 1849924 logs.go:282] 0 containers: []
	W1124 09:56:42.282793 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:42.282800 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:42.282811 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:42.298712 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:42.298729 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:42.363239 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:42.355539   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.355978   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.357856   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.358221   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:42.359646   16400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:42.363249 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:42.363260 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:42.437643 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:42.437663 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:42.475221 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:42.475237 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:45.048529 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:45.067334 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:45.067432 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:45.099636 1849924 cri.go:89] found id: ""
	I1124 09:56:45.099652 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.099659 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:45.099666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:45.099762 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:45.132659 1849924 cri.go:89] found id: ""
	I1124 09:56:45.132693 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.132701 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:45.132708 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:45.132792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:45.169282 1849924 cri.go:89] found id: ""
	I1124 09:56:45.169306 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.169314 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:45.169320 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:45.169398 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:45.226517 1849924 cri.go:89] found id: ""
	I1124 09:56:45.226533 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.226542 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:45.226548 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:45.226626 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:45.265664 1849924 cri.go:89] found id: ""
	I1124 09:56:45.265680 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.265687 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:45.265693 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:45.265759 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:45.298503 1849924 cri.go:89] found id: ""
	I1124 09:56:45.298517 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.298525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:45.298531 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:45.298599 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:45.329403 1849924 cri.go:89] found id: ""
	I1124 09:56:45.329436 1849924 logs.go:282] 0 containers: []
	W1124 09:56:45.329445 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:45.329453 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:45.329464 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:45.345344 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:45.345361 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:45.412742 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:45.404962   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.405721   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.406519   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.407450   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:45.408946   16504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:45.412752 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:45.412763 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:45.493978 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:45.493998 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:45.531425 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:45.531441 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.098018 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:48.108764 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:48.108836 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:48.134307 1849924 cri.go:89] found id: ""
	I1124 09:56:48.134321 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.134328 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:48.134333 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:48.134390 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:48.159252 1849924 cri.go:89] found id: ""
	I1124 09:56:48.159266 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.159273 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:48.159279 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:48.159337 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:48.184464 1849924 cri.go:89] found id: ""
	I1124 09:56:48.184478 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.184496 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:48.184507 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:48.184589 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:48.209500 1849924 cri.go:89] found id: ""
	I1124 09:56:48.209513 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.209520 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:48.209526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:48.209590 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:48.236025 1849924 cri.go:89] found id: ""
	I1124 09:56:48.236039 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.236045 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:48.236051 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:48.236121 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:48.262196 1849924 cri.go:89] found id: ""
	I1124 09:56:48.262210 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.262216 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:48.262222 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:48.262285 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:48.286684 1849924 cri.go:89] found id: ""
	I1124 09:56:48.286698 1849924 logs.go:282] 0 containers: []
	W1124 09:56:48.286705 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:48.286712 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:48.286725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:48.354155 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:48.354174 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:48.369606 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:48.369625 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:48.436183 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:48.427743   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.428311   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.429968   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.430492   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:48.432091   16610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:48.436193 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:48.436207 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:48.516667 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:48.516688 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.047020 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:51.057412 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:51.057477 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:51.087137 1849924 cri.go:89] found id: ""
	I1124 09:56:51.087159 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.087167 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:51.087172 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:51.087241 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:51.115003 1849924 cri.go:89] found id: ""
	I1124 09:56:51.115018 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.115025 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:51.115031 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:51.115093 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:51.144604 1849924 cri.go:89] found id: ""
	I1124 09:56:51.144622 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.144631 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:51.144638 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:51.144706 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:51.172310 1849924 cri.go:89] found id: ""
	I1124 09:56:51.172323 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.172338 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:51.172345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:51.172413 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:51.200354 1849924 cri.go:89] found id: ""
	I1124 09:56:51.200376 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.200384 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:51.200390 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:51.200463 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:51.225889 1849924 cri.go:89] found id: ""
	I1124 09:56:51.225903 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.225911 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:51.225917 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:51.225974 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:51.250937 1849924 cri.go:89] found id: ""
	I1124 09:56:51.250950 1849924 logs.go:282] 0 containers: []
	W1124 09:56:51.250956 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:51.250972 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:51.250984 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:51.281935 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:51.281951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:51.346955 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:51.346975 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:51.362412 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:51.362428 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:51.424513 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:51.416630   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.417425   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419110   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.419410   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:51.420894   16725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1124 09:56:51.424523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:51.424534 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.006160 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:54.017499 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:54.017565 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:54.048035 1849924 cri.go:89] found id: ""
	I1124 09:56:54.048049 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.048056 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:54.048062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:54.048117 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:54.075193 1849924 cri.go:89] found id: ""
	I1124 09:56:54.075207 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.075214 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:54.075220 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:54.075278 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:54.101853 1849924 cri.go:89] found id: ""
	I1124 09:56:54.101868 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.101875 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:54.101880 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:54.101938 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:54.128585 1849924 cri.go:89] found id: ""
	I1124 09:56:54.128600 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.128608 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:54.128614 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:54.128673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:54.154726 1849924 cri.go:89] found id: ""
	I1124 09:56:54.154742 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.154750 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:54.154756 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:54.154819 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:54.180936 1849924 cri.go:89] found id: ""
	I1124 09:56:54.180975 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.180984 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:54.180990 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:54.181070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:54.209038 1849924 cri.go:89] found id: ""
	I1124 09:56:54.209060 1849924 logs.go:282] 0 containers: []
	W1124 09:56:54.209067 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:54.209075 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:54.209085 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:54.279263 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:54.279289 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:54.295105 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:54.295131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:54.367337 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:54.358441   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.359306   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361009   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.361695   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:54.363190   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:54.367348 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:54.367360 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:54.442973 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:54.442995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:56.980627 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:56.990375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:56.990434 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:57.016699 1849924 cri.go:89] found id: ""
	I1124 09:56:57.016713 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.016720 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:57.016726 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:57.016789 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:57.042924 1849924 cri.go:89] found id: ""
	I1124 09:56:57.042938 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.042945 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:57.042950 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:57.043009 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:56:57.071972 1849924 cri.go:89] found id: ""
	I1124 09:56:57.071986 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.071993 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:56:57.071998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:56:57.072057 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:56:57.097765 1849924 cri.go:89] found id: ""
	I1124 09:56:57.097780 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.097789 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:56:57.097796 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:56:57.097861 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:56:57.124764 1849924 cri.go:89] found id: ""
	I1124 09:56:57.124778 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.124796 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:56:57.124802 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:56:57.124871 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:56:57.151558 1849924 cri.go:89] found id: ""
	I1124 09:56:57.151584 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.151591 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:56:57.151597 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:56:57.151667 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:56:57.178335 1849924 cri.go:89] found id: ""
	I1124 09:56:57.178348 1849924 logs.go:282] 0 containers: []
	W1124 09:56:57.178355 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:56:57.178372 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:56:57.178383 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:56:57.253968 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:56:57.253988 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:56:57.284364 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:56:57.284380 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:56:57.349827 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:56:57.349847 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:56:57.364617 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:56:57.364633 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:56:57.425688 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:56:57.417842   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.418692   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420242   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.420551   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:56:57.422041   16939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:56:59.926489 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:56:59.936801 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:56:59.936870 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:56:59.961715 1849924 cri.go:89] found id: ""
	I1124 09:56:59.961728 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.961735 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:56:59.961741 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:56:59.961801 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:56:59.990466 1849924 cri.go:89] found id: ""
	I1124 09:56:59.990480 1849924 logs.go:282] 0 containers: []
	W1124 09:56:59.990488 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:56:59.990494 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:56:59.990554 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:00.129137 1849924 cri.go:89] found id: ""
	I1124 09:57:00.129161 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.129169 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:00.129175 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:00.129257 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:00.211462 1849924 cri.go:89] found id: ""
	I1124 09:57:00.211478 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.211490 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:00.211506 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:00.211593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:00.274315 1849924 cri.go:89] found id: ""
	I1124 09:57:00.274338 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.274346 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:00.274363 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:00.274453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:00.321199 1849924 cri.go:89] found id: ""
	I1124 09:57:00.321233 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.321241 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:00.321247 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:00.321324 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:00.372845 1849924 cri.go:89] found id: ""
	I1124 09:57:00.372861 1849924 logs.go:282] 0 containers: []
	W1124 09:57:00.372869 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:00.372878 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:00.372889 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:00.444462 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:00.444485 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:00.465343 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:00.465381 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:00.553389 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:00.544084   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.544891   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547044   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.547489   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:00.549393   17036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:00.553402 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:00.553418 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:00.632199 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:00.632219 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:03.162773 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:03.173065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:03.173150 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:03.200418 1849924 cri.go:89] found id: ""
	I1124 09:57:03.200431 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.200439 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:03.200444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:03.200502 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:03.227983 1849924 cri.go:89] found id: ""
	I1124 09:57:03.227997 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.228004 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:03.228009 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:03.228070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:03.257554 1849924 cri.go:89] found id: ""
	I1124 09:57:03.257568 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.257575 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:03.257581 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:03.257639 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:03.283198 1849924 cri.go:89] found id: ""
	I1124 09:57:03.283210 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.283217 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:03.283223 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:03.283280 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:03.307981 1849924 cri.go:89] found id: ""
	I1124 09:57:03.307994 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.308002 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:03.308007 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:03.308063 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:03.337021 1849924 cri.go:89] found id: ""
	I1124 09:57:03.337035 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.337042 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:03.337047 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:03.337130 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:03.362116 1849924 cri.go:89] found id: ""
	I1124 09:57:03.362130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:03.362137 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:03.362144 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:03.362155 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:03.427932 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:03.427951 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:03.442952 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:03.442968 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:03.527978 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:03.519058   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.519868   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.521732   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.522423   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:03.524179   17136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:03.527989 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:03.528002 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:03.603993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:03.604012 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.134966 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:06.147607 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:06.147673 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:06.173217 1849924 cri.go:89] found id: ""
	I1124 09:57:06.173231 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.173238 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:06.173243 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:06.173302 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:06.203497 1849924 cri.go:89] found id: ""
	I1124 09:57:06.203511 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.203518 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:06.203524 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:06.203581 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:06.232192 1849924 cri.go:89] found id: ""
	I1124 09:57:06.232205 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.232212 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:06.232219 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:06.232276 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:06.261698 1849924 cri.go:89] found id: ""
	I1124 09:57:06.261711 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.261717 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:06.261723 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:06.261779 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:06.286623 1849924 cri.go:89] found id: ""
	I1124 09:57:06.286642 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.286650 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:06.286656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:06.286717 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:06.316085 1849924 cri.go:89] found id: ""
	I1124 09:57:06.316098 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.316105 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:06.316110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:06.316169 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:06.344243 1849924 cri.go:89] found id: ""
	I1124 09:57:06.344257 1849924 logs.go:282] 0 containers: []
	W1124 09:57:06.344264 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:06.344273 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:06.344283 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:06.375793 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:06.375809 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:06.441133 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:06.441160 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:06.457259 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:06.457282 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:06.534017 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:06.525924   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.526335   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.527997   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.528489   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:06.530105   17257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:06.534028 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:06.534040 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.110740 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:09.122421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:09.122484 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:09.148151 1849924 cri.go:89] found id: ""
	I1124 09:57:09.148165 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.148172 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:09.148177 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:09.148235 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:09.173265 1849924 cri.go:89] found id: ""
	I1124 09:57:09.173279 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.173288 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:09.173295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:09.173357 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:09.198363 1849924 cri.go:89] found id: ""
	I1124 09:57:09.198377 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.198384 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:09.198389 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:09.198447 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:09.224567 1849924 cri.go:89] found id: ""
	I1124 09:57:09.224581 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.224588 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:09.224594 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:09.224652 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:09.249182 1849924 cri.go:89] found id: ""
	I1124 09:57:09.249195 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.249205 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:09.249210 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:09.249281 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:09.274039 1849924 cri.go:89] found id: ""
	I1124 09:57:09.274053 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.274060 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:09.274065 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:09.274125 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:09.299730 1849924 cri.go:89] found id: ""
	I1124 09:57:09.299744 1849924 logs.go:282] 0 containers: []
	W1124 09:57:09.299751 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:09.299758 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:09.299770 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:09.364094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:09.355260   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.356001   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.357656   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.358611   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:09.359441   17339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:09.364105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:09.364120 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:09.441482 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:09.441504 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:09.479944 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:09.479961 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:09.549349 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:09.549367 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:12.064927 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:12.075315 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:12.075376 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:12.103644 1849924 cri.go:89] found id: ""
	I1124 09:57:12.103658 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.103665 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:12.103670 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:12.103774 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:12.129120 1849924 cri.go:89] found id: ""
	I1124 09:57:12.129134 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.129141 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:12.129147 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:12.129215 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:12.156010 1849924 cri.go:89] found id: ""
	I1124 09:57:12.156024 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.156031 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:12.156036 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:12.156094 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:12.184275 1849924 cri.go:89] found id: ""
	I1124 09:57:12.184289 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.184296 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:12.184301 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:12.184362 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:12.214700 1849924 cri.go:89] found id: ""
	I1124 09:57:12.214713 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.214726 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:12.214732 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:12.214792 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:12.239546 1849924 cri.go:89] found id: ""
	I1124 09:57:12.239559 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.239566 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:12.239572 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:12.239635 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:12.264786 1849924 cri.go:89] found id: ""
	I1124 09:57:12.264800 1849924 logs.go:282] 0 containers: []
	W1124 09:57:12.264806 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:12.264814 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:12.264826 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:12.324457 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:12.316852   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.317554   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.318633   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.319188   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:12.320818   17444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:12.324467 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:12.324477 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:12.401396 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:12.401417 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:12.432520 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:12.432535 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:12.502857 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:12.502877 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.018809 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:15.038661 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:15.038741 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:15.069028 1849924 cri.go:89] found id: ""
	I1124 09:57:15.069043 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.069050 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:15.069056 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:15.069139 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:15.096495 1849924 cri.go:89] found id: ""
	I1124 09:57:15.096513 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.096521 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:15.096526 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:15.096593 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:15.125417 1849924 cri.go:89] found id: ""
	I1124 09:57:15.125430 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.125438 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:15.125444 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:15.125508 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:15.152259 1849924 cri.go:89] found id: ""
	I1124 09:57:15.152274 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.152281 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:15.152287 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:15.152348 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:15.178920 1849924 cri.go:89] found id: ""
	I1124 09:57:15.178934 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.178942 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:15.178947 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:15.179024 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:15.207630 1849924 cri.go:89] found id: ""
	I1124 09:57:15.207643 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.207650 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:15.207656 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:15.207715 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:15.237971 1849924 cri.go:89] found id: ""
	I1124 09:57:15.237985 1849924 logs.go:282] 0 containers: []
	W1124 09:57:15.237992 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:15.238000 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:15.238011 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:15.305169 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:15.305187 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:15.320240 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:15.320257 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:15.393546 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:15.385402   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.386137   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.387859   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.388310   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:15.389937   17557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:15.393556 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:15.393592 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:15.470159 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:15.470179 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:18.001255 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:18.013421 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:18.013488 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:18.040787 1849924 cri.go:89] found id: ""
	I1124 09:57:18.040801 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.040808 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:18.040814 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:18.040873 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:18.066460 1849924 cri.go:89] found id: ""
	I1124 09:57:18.066475 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.066482 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:18.066487 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:18.066544 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:18.093970 1849924 cri.go:89] found id: ""
	I1124 09:57:18.093983 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.093990 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:18.093998 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:18.094070 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:18.119292 1849924 cri.go:89] found id: ""
	I1124 09:57:18.119306 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.119312 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:18.119318 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:18.119375 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:18.144343 1849924 cri.go:89] found id: ""
	I1124 09:57:18.144356 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.144363 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:18.144369 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:18.144428 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:18.176349 1849924 cri.go:89] found id: ""
	I1124 09:57:18.176362 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.176369 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:18.176375 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:18.176435 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:18.200900 1849924 cri.go:89] found id: ""
	I1124 09:57:18.200913 1849924 logs.go:282] 0 containers: []
	W1124 09:57:18.200920 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:18.200927 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:18.200938 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:18.266434 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:18.266452 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:18.281611 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:18.281627 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:18.347510 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:18.338744   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.339638   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341154   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.341618   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:18.343169   17661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:18.347523 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:18.347536 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:18.435234 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:18.435254 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:20.973569 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:20.984347 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:20.984418 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:21.011115 1849924 cri.go:89] found id: ""
	I1124 09:57:21.011130 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.011137 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:21.011142 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:21.011204 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:21.041877 1849924 cri.go:89] found id: ""
	I1124 09:57:21.041891 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.041899 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:21.041904 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:21.041963 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:21.067204 1849924 cri.go:89] found id: ""
	I1124 09:57:21.067217 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.067224 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:21.067229 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:21.067288 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:21.096444 1849924 cri.go:89] found id: ""
	I1124 09:57:21.096458 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.096464 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:21.096470 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:21.096526 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:21.122011 1849924 cri.go:89] found id: ""
	I1124 09:57:21.122025 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.122033 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:21.122038 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:21.122098 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:21.150504 1849924 cri.go:89] found id: ""
	I1124 09:57:21.150518 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.150525 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:21.150530 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:21.150601 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:21.179560 1849924 cri.go:89] found id: ""
	I1124 09:57:21.179573 1849924 logs.go:282] 0 containers: []
	W1124 09:57:21.179579 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:21.179587 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:21.179597 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:21.263112 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:21.263134 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:21.291875 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:21.291891 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:21.358120 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:21.358139 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:21.373381 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:21.373401 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:21.437277 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:21.428643   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.429550   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431264   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.431602   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:21.433182   17779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:23.938404 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:23.948703 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:23.948770 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:23.975638 1849924 cri.go:89] found id: ""
	I1124 09:57:23.975653 1849924 logs.go:282] 0 containers: []
	W1124 09:57:23.975660 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:23.975666 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:23.975797 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:24.003099 1849924 cri.go:89] found id: ""
	I1124 09:57:24.003114 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.003122 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:24.003127 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:24.003195 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:24.031320 1849924 cri.go:89] found id: ""
	I1124 09:57:24.031333 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.031340 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:24.031345 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:24.031412 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:24.057464 1849924 cri.go:89] found id: ""
	I1124 09:57:24.057479 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.057486 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:24.057491 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:24.057560 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:24.083571 1849924 cri.go:89] found id: ""
	I1124 09:57:24.083586 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.083593 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:24.083598 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:24.083656 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:24.109710 1849924 cri.go:89] found id: ""
	I1124 09:57:24.109724 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.109732 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:24.109737 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:24.109810 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:24.134957 1849924 cri.go:89] found id: ""
	I1124 09:57:24.134971 1849924 logs.go:282] 0 containers: []
	W1124 09:57:24.134978 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:24.134985 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:24.134995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:24.206698 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:24.206725 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:24.221977 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:24.221995 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:24.287450 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:24.278821   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.280376   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.281187   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.282207   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:24.283887   17871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:24.287461 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:24.287474 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:24.364870 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:24.364890 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:26.899825 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:26.911192 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:26.911260 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:26.937341 1849924 cri.go:89] found id: ""
	I1124 09:57:26.937355 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.937361 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:26.937367 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:26.937429 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:26.966037 1849924 cri.go:89] found id: ""
	I1124 09:57:26.966050 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.966057 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:26.966062 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:26.966119 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:26.994487 1849924 cri.go:89] found id: ""
	I1124 09:57:26.994501 1849924 logs.go:282] 0 containers: []
	W1124 09:57:26.994508 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:26.994514 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:26.994572 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:27.024331 1849924 cri.go:89] found id: ""
	I1124 09:57:27.024345 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.024351 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:27.024357 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:27.024414 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:27.051922 1849924 cri.go:89] found id: ""
	I1124 09:57:27.051936 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.051943 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:27.051949 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:27.052007 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:27.079084 1849924 cri.go:89] found id: ""
	I1124 09:57:27.079097 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.079104 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:27.079110 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:27.079166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:27.105333 1849924 cri.go:89] found id: ""
	I1124 09:57:27.105346 1849924 logs.go:282] 0 containers: []
	W1124 09:57:27.105362 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:27.105371 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:27.105399 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:27.136135 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:27.136151 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:27.202777 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:27.202797 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:27.218147 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:27.218169 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:27.287094 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:27.279109   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.279712   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281215   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.281830   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:27.282984   17988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:27.287105 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:27.287116 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:29.863883 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:29.874162 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:57:29.874270 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:57:29.899809 1849924 cri.go:89] found id: ""
	I1124 09:57:29.899825 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.899833 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 09:57:29.899839 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 09:57:29.899897 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:57:29.925268 1849924 cri.go:89] found id: ""
	I1124 09:57:29.925282 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.925289 1849924 logs.go:284] No container was found matching "etcd"
	I1124 09:57:29.925295 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 09:57:29.925355 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:57:29.953756 1849924 cri.go:89] found id: ""
	I1124 09:57:29.953770 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.953778 1849924 logs.go:284] No container was found matching "coredns"
	I1124 09:57:29.953783 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:57:29.953844 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:57:29.979723 1849924 cri.go:89] found id: ""
	I1124 09:57:29.979737 1849924 logs.go:282] 0 containers: []
	W1124 09:57:29.979744 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 09:57:29.979750 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:57:29.979809 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:57:30.029207 1849924 cri.go:89] found id: ""
	I1124 09:57:30.029223 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.029231 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:57:30.029237 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:57:30.029307 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:57:30.086347 1849924 cri.go:89] found id: ""
	I1124 09:57:30.086364 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.086374 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 09:57:30.086381 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 09:57:30.086453 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:57:30.117385 1849924 cri.go:89] found id: ""
	I1124 09:57:30.117412 1849924 logs.go:282] 0 containers: []
	W1124 09:57:30.117420 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 09:57:30.117429 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 09:57:30.117442 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:57:30.134069 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:57:30.134089 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:57:30.200106 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 09:57:30.191781   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.192521   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194151   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.194660   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 09:57:30.196222   18082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:57:30.200116 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 09:57:30.200131 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 09:57:30.277714 1849924 logs.go:123] Gathering logs for container status ...
	I1124 09:57:30.277734 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:57:30.306530 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 09:57:30.306548 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:57:32.873889 1849924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:57:32.884169 1849924 kubeadm.go:602] duration metric: took 4m3.946947382s to restartPrimaryControlPlane
	W1124 09:57:32.884229 1849924 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1124 09:57:32.884313 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 09:57:33.294612 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:57:33.307085 1849924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:57:33.314867 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:57:33.314936 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:57:33.322582 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:57:33.322593 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 09:57:33.322667 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 09:57:33.330196 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:57:33.330260 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:57:33.337917 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 09:57:33.345410 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:57:33.345471 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:57:33.352741 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.360084 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:57:33.360141 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:57:33.367359 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 09:57:33.374680 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:57:33.374740 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:57:33.381720 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:57:33.421475 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:57:33.421672 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:57:33.492568 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:57:33.492631 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 09:57:33.492668 1849924 kubeadm.go:319] OS: Linux
	I1124 09:57:33.492712 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:57:33.492759 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 09:57:33.492805 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:57:33.492852 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:57:33.492898 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:57:33.492945 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:57:33.492989 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:57:33.493036 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:57:33.493080 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 09:57:33.559811 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:57:33.559935 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:57:33.560031 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:57:33.569641 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:57:33.572593 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 09:57:33.572694 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:57:33.572778 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:57:33.572897 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 09:57:33.572970 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 09:57:33.573053 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 09:57:33.573134 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 09:57:33.573209 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 09:57:33.573281 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 09:57:33.573362 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 09:57:33.573444 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 09:57:33.573489 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 09:57:33.573554 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:57:34.404229 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:57:34.574070 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:57:34.974228 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:57:35.133185 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:57:35.260833 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:57:35.261355 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:57:35.265684 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:57:35.269119 1849924 out.go:252]   - Booting up control plane ...
	I1124 09:57:35.269213 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:57:35.269289 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:57:35.269807 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:57:35.284618 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:57:35.284910 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:57:35.293324 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:57:35.293620 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:57:35.293661 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:57:35.424973 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:57:35.425087 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:01:35.425195 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000242606s
	I1124 10:01:35.425226 1849924 kubeadm.go:319] 
	I1124 10:01:35.425316 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:01:35.425374 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:01:35.425488 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:01:35.425495 1849924 kubeadm.go:319] 
	I1124 10:01:35.425617 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:01:35.425655 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:01:35.425685 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:01:35.425690 1849924 kubeadm.go:319] 
	I1124 10:01:35.429378 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:01:35.429792 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:01:35.429899 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:01:35.430134 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:01:35.430138 1849924 kubeadm.go:319] 
	I1124 10:01:35.430206 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1124 10:01:35.430308 1849924 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000242606s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1124 10:01:35.430396 1849924 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 10:01:35.837421 1849924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:01:35.850299 1849924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 10:01:35.850356 1849924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:01:35.858169 1849924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 10:01:35.858180 1849924 kubeadm.go:158] found existing configuration files:
	
	I1124 10:01:35.858230 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 10:01:35.866400 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 10:01:35.866456 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 10:01:35.873856 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 10:01:35.881958 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 10:01:35.882015 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 10:01:35.889339 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.896920 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 10:01:35.896977 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:01:35.904670 1849924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 10:01:35.912117 1849924 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 10:01:35.912171 1849924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:01:35.919741 1849924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 10:01:35.956259 1849924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 10:01:35.956313 1849924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 10:01:36.031052 1849924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 10:01:36.031118 1849924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 10:01:36.031152 1849924 kubeadm.go:319] OS: Linux
	I1124 10:01:36.031196 1849924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 10:01:36.031243 1849924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 10:01:36.031289 1849924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 10:01:36.031336 1849924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 10:01:36.031383 1849924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 10:01:36.031430 1849924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 10:01:36.031474 1849924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 10:01:36.031521 1849924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 10:01:36.031566 1849924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 10:01:36.099190 1849924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 10:01:36.099321 1849924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 10:01:36.099441 1849924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 10:01:36.106857 1849924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 10:01:36.112186 1849924 out.go:252]   - Generating certificates and keys ...
	I1124 10:01:36.112274 1849924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 10:01:36.112337 1849924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 10:01:36.112413 1849924 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 10:01:36.112473 1849924 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 10:01:36.112542 1849924 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 10:01:36.112594 1849924 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 10:01:36.112656 1849924 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 10:01:36.112719 1849924 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 10:01:36.112792 1849924 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 10:01:36.112863 1849924 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 10:01:36.112900 1849924 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 10:01:36.112954 1849924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 10:01:36.197295 1849924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 10:01:36.531352 1849924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 10:01:36.984185 1849924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 10:01:37.290064 1849924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 10:01:37.558441 1849924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 10:01:37.559017 1849924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 10:01:37.561758 1849924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 10:01:37.564997 1849924 out.go:252]   - Booting up control plane ...
	I1124 10:01:37.565117 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 10:01:37.565200 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 10:01:37.566811 1849924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 10:01:37.581952 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 10:01:37.582056 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 10:01:37.589882 1849924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 10:01:37.590273 1849924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 10:01:37.590483 1849924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 10:01:37.733586 1849924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 10:01:37.733692 1849924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:05:37.728742 1849924 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000440097s
	I1124 10:05:37.728760 1849924 kubeadm.go:319] 
	I1124 10:05:37.729148 1849924 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:05:37.729217 1849924 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:05:37.729548 1849924 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:05:37.729554 1849924 kubeadm.go:319] 
	I1124 10:05:37.729744 1849924 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:05:37.729799 1849924 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:05:37.729853 1849924 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:05:37.729860 1849924 kubeadm.go:319] 
	I1124 10:05:37.734894 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:05:37.735345 1849924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:05:37.735452 1849924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:05:37.735693 1849924 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:05:37.735697 1849924 kubeadm.go:319] 
	I1124 10:05:37.735773 1849924 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1124 10:05:37.735829 1849924 kubeadm.go:403] duration metric: took 12m8.833752588s to StartCluster
	I1124 10:05:37.735872 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:05:37.735930 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:05:37.769053 1849924 cri.go:89] found id: ""
	I1124 10:05:37.769070 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.769076 1849924 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:05:37.769083 1849924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:05:37.769166 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:05:37.796753 1849924 cri.go:89] found id: ""
	I1124 10:05:37.796767 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.796774 1849924 logs.go:284] No container was found matching "etcd"
	I1124 10:05:37.796780 1849924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:05:37.796839 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:05:37.822456 1849924 cri.go:89] found id: ""
	I1124 10:05:37.822470 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.822487 1849924 logs.go:284] No container was found matching "coredns"
	I1124 10:05:37.822492 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:05:37.822556 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:05:37.847572 1849924 cri.go:89] found id: ""
	I1124 10:05:37.847587 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.847594 1849924 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:05:37.847601 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:05:37.847660 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:05:37.874600 1849924 cri.go:89] found id: ""
	I1124 10:05:37.874614 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.874621 1849924 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:05:37.874630 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:05:37.874694 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:05:37.899198 1849924 cri.go:89] found id: ""
	I1124 10:05:37.899212 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.899220 1849924 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:05:37.899226 1849924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:05:37.899286 1849924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:05:37.927492 1849924 cri.go:89] found id: ""
	I1124 10:05:37.927506 1849924 logs.go:282] 0 containers: []
	W1124 10:05:37.927513 1849924 logs.go:284] No container was found matching "kindnet"
	I1124 10:05:37.927521 1849924 logs.go:123] Gathering logs for kubelet ...
	I1124 10:05:37.927531 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:05:37.996934 1849924 logs.go:123] Gathering logs for dmesg ...
	I1124 10:05:37.996954 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:05:38.018248 1849924 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:05:38.018265 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:05:38.095385 1849924 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1124 10:05:38.087821   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.088311   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.089860   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.090192   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:38.091739   21865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:05:38.095401 1849924 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:05:38.095411 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:05:38.170993 1849924 logs.go:123] Gathering logs for container status ...
	I1124 10:05:38.171016 1849924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:05:38.204954 1849924 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1124 10:05:38.205004 1849924 out.go:285] * 
	W1124 10:05:38.205075 1849924 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.205091 1849924 out.go:285] * 
	W1124 10:05:38.207567 1849924 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 10:05:38.212617 1849924 out.go:203] 
	W1124 10:05:38.216450 1849924 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000440097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:05:38.216497 1849924 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1124 10:05:38.216516 1849924 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1124 10:05:38.219595 1849924 out.go:203] 
	
	
	==> CRI-O <==
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338397081Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 09:53:27 functional-373432 crio[10735]: time="2025-11-24T09:53:27.338466046Z" level=info msg="No systemd watchdog enabled"
	Nov 24 09:53:27 functional-373432 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.563306518Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=16b574c8-5f01-4b5f-b4c1-033ff8df7e69 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.564186603Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=c16d1184-1db0-41cd-b079-b58f2a21c360 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.564711746Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=dcdd2354-d66a-4ea6-b097-17376749f631 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.56539822Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e3da7e1e-5602-4f94-87aa-f42cce3f944e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.565983081Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=5aac8310-fbf3-4ab4-abba-3add8b26d6c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.566558752Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9977cb6a-a164-4bf3-8414-583100475093 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 09:57:33 functional-373432 crio[10735]: time="2025-11-24T09:57:33.567059862Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5734ab5d-327c-48f0-9238-94a4932df1b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.102671605Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=652a2275-3cb5-4895-9bc9-26b562399a5a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.103518123Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=705a5d4b-cd71-4163-b52e-bdb52326e8e8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.104114725Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=26eba83c-7b31-451a-890a-d51786be660e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.104602994Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d922ec08-bcda-413d-8143-5c97b1367b6e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.10506595Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=470fbbd3-e46c-4376-b51e-18b84b192ec6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.105536807Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c84d94ac-c66d-4a84-b9e9-fb5342a05f00 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:01:36 functional-373432 crio[10735]: time="2025-11-24T10:01:36.105962946Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c833fe93-752f-447a-94cf-5fbf6c21285a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.532043866Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=4bd8726f-27a0-4e57-a7e3-512272096eef name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.571711909Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-373432" id=aef09199-0d9c-4fcd-a86e-4644b84003d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.571849232Z" level=info msg="Image docker.io/kicbase/echo-server:functional-373432 not found" id=aef09199-0d9c-4fcd-a86e-4644b84003d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.571892719Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-373432 found" id=aef09199-0d9c-4fcd-a86e-4644b84003d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601271581Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-373432" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.601433691Z" level=info msg="Image localhost/kicbase/echo-server:functional-373432 not found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:47 functional-373432 crio[10735]: time="2025-11-24T10:05:47.60148682Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-373432 found" id=19f8cf69-de30-4e40-ae82-0ac8778bea3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:05:48 functional-373432 crio[10735]: time="2025-11-24T10:05:48.673335433Z" level=info msg="Checking image status: kicbase/echo-server:functional-373432" id=df47687b-4b6a-4acb-8d1e-f46521441883 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1124 10:05:48.846164   22665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:48.846790   22665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:48.848401   22665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:48.848856   22665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1124 10:05:48.850370   22665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov24 08:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 08:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 09:13] overlayfs: idmapped layers are currently not supported
	[Nov24 09:19] overlayfs: idmapped layers are currently not supported
	[Nov24 09:20] overlayfs: idmapped layers are currently not supported
	[Nov24 09:33] FS-Cache: Duplicate cookie detected
	[  +0.001239] FS-Cache: O-cookie c=0000007f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001660] FS-Cache: O-cookie d=000000000bbdd1b9{9P.session} n=00000000b617e19b
	[  +0.001462] FS-Cache: O-key=[10] '34333032333239343338'
	[  +0.000827] FS-Cache: N-cookie c=00000080 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=000000000bbdd1b9{9P.session} n=00000000759d212e
	[  +0.001120] FS-Cache: N-key=[10] '34333032333239343338'
	[Nov24 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:05:48 up  8:48,  0 user,  load average: 0.52, 0.27, 0.39
	Linux functional-373432 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 10:05:46 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:46 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 973.
	Nov 24 10:05:46 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:46 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:47 functional-373432 kubelet[22453]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:47 functional-373432 kubelet[22453]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:47 functional-373432 kubelet[22453]: E1124 10:05:47.037338   22453 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:47 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:47 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:47 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 974.
	Nov 24 10:05:47 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:47 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:47 functional-373432 kubelet[22518]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:47 functional-373432 kubelet[22518]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:47 functional-373432 kubelet[22518]: E1124 10:05:47.776407   22518 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:47 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:47 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:05:48 functional-373432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 975.
	Nov 24 10:05:48 functional-373432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:48 functional-373432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:05:48 functional-373432 kubelet[22555]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:48 functional-373432 kubelet[22555]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:05:48 functional-373432 kubelet[22555]: E1124 10:05:48.552836   22555 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:05:48 functional-373432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:05:48 functional-373432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-373432 -n functional-373432: exit status 2 (447.901217ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-373432" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.01s)
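Note: this failure shares the same root cause visible in the kubelet log above — kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up. Per the kubeadm warning in the captured output, cgroup v1 hosts must now opt in explicitly. A minimal sketch of that opt-in, assuming the YAML field is the conventional lowerCamelCase form (`failCgroupV1`) of the option named in the warning:

```yaml
# KubeletConfiguration patch to keep kubelet v1.35+ starting on a cgroup v1 host.
# Field name assumed from the kubeadm warning text ('FailCgroupV1'); cgroup v1
# remains deprecated, so this is a stopgap until the host moves to cgroup v2.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

As the warning also notes, the corresponding SystemVerification preflight check would still need to be skipped explicitly (kubeadm's `--ignore-preflight-errors` already includes `SystemVerification` in the invocation above).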

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image load --daemon kicbase/echo-server:functional-373432 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-373432" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image load --daemon kicbase/echo-server:functional-373432 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-373432" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-373432
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image load --daemon kicbase/echo-server:functional-373432 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 image load --daemon kicbase/echo-server:functional-373432 --alsologtostderr: (1.040801653s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-373432" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image save kicbase/echo-server:functional-373432 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1124 10:05:51.776272 1864262 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:05:51.776450 1864262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:05:51.776482 1864262 out.go:374] Setting ErrFile to fd 2...
	I1124 10:05:51.776504 1864262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:05:51.776774 1864262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:05:51.777469 1864262 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 10:05:51.777648 1864262 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 10:05:51.778239 1864262 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
	I1124 10:05:51.810468 1864262 ssh_runner.go:195] Run: systemctl --version
	I1124 10:05:51.810530 1864262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
	I1124 10:05:51.833724 1864262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
	I1124 10:05:51.941881 1864262 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1124 10:05:51.941980 1864262 cache_images.go:255] Failed to load cached images for "functional-373432": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1124 10:05:51.942011 1864262 cache_images.go:267] failed pushing to: functional-373432

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-373432
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image save --daemon kicbase/echo-server:functional-373432 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-373432
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-373432: exit status 1 (19.072974ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-373432

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-373432

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1124 10:05:52.667339 1864538 out.go:360] Setting OutFile to fd 1 ...
I1124 10:05:52.667450 1864538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:05:52.667458 1864538 out.go:374] Setting ErrFile to fd 2...
I1124 10:05:52.667463 1864538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:05:52.667727 1864538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 10:05:52.667989 1864538 mustload.go:66] Loading cluster: functional-373432
I1124 10:05:52.668434 1864538 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:05:52.668947 1864538 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
I1124 10:05:52.725431 1864538 host.go:66] Checking if "functional-373432" exists ...
I1124 10:05:52.725786 1864538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1124 10:05:52.839771 1864538 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 10:05:52.824078886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1124 10:05:52.839896 1864538 api_server.go:166] Checking apiserver status ...
I1124 10:05:52.839957 1864538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1124 10:05:52.840001 1864538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
I1124 10:05:52.876586 1864538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
W1124 10:05:53.010397 1864538 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1124 10:05:53.013817 1864538 out.go:179] * The control-plane node functional-373432 apiserver is not running: (state=Stopped)
I1124 10:05:53.016938 1864538 out.go:179]   To start a cluster, run: "minikube start -p functional-373432"

stdout: * The control-plane node functional-373432 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-373432"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1864537: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-373432 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-373432 apply -f testdata/testsvc.yaml: exit status 1 (94.778424ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-373432 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (124.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.107.117.210": Temporary Error: Get "http://10.107.117.210": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-373432 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-373432 get svc nginx-svc: exit status 1 (73.140424ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-373432 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (124.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-373432 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-373432 create deployment hello-node --image kicbase/echo-server: exit status 1 (66.575581ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-373432 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 service list: exit status 103 (297.224284ms)

-- stdout --
	* The control-plane node functional-373432 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-373432"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-373432 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-373432 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-373432\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 service list -o json: exit status 103 (256.564258ms)

-- stdout --
	* The control-plane node functional-373432 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-373432"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-373432 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 service --namespace=default --https --url hello-node: exit status 103 (266.247592ms)

-- stdout --
	* The control-plane node functional-373432 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-373432"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-373432 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 service hello-node --url --format={{.IP}}: exit status 103 (264.300856ms)

-- stdout --
	* The control-plane node functional-373432 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-373432"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-373432 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-373432 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-373432\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 service hello-node --url: exit status 103 (263.677048ms)

-- stdout --
	* The control-plane node functional-373432 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-373432"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-373432 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-373432 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-373432"
functional_test.go:1579: failed to parse "* The control-plane node functional-373432 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-373432\"": parse "* The control-plane node functional-373432 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-373432\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (1.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763978885329930944" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763978885329930944" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763978885329930944" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001/test-1763978885329930944
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 10:08 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 10:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 10:08 test-1763978885329930944
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh cat /mount-9p/test-1763978885329930944
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-373432 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-373432 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.727145ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-373432 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (287.447208ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=33491)
	total 2
	-rw-r--r-- 1 docker docker 24 Nov 24 10:08 created-by-test
	-rw-r--r-- 1 docker docker 24 Nov 24 10:08 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Nov 24 10:08 test-1763978885329930944
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-373432 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:33491
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001:/mount-9p --alsologtostderr -v=1] stderr:
I1124 10:08:05.379230 1866953 out.go:360] Setting OutFile to fd 1 ...
I1124 10:08:05.379406 1866953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:05.379419 1866953 out.go:374] Setting ErrFile to fd 2...
I1124 10:08:05.379453 1866953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:05.380060 1866953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 10:08:05.380396 1866953 mustload.go:66] Loading cluster: functional-373432
I1124 10:08:05.383403 1866953 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:05.383999 1866953 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
I1124 10:08:05.402687 1866953 host.go:66] Checking if "functional-373432" exists ...
I1124 10:08:05.403008 1866953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1124 10:08:05.474772 1866953 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:08:05.462890014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1124 10:08:05.474948 1866953 cli_runner.go:164] Run: docker network inspect functional-373432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 10:08:05.495573 1866953 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001 into VM as /mount-9p ...
I1124 10:08:05.498730 1866953 out.go:179]   - Mount type:   9p
I1124 10:08:05.501704 1866953 out.go:179]   - User ID:      docker
I1124 10:08:05.504914 1866953 out.go:179]   - Group ID:     docker
I1124 10:08:05.507892 1866953 out.go:179]   - Version:      9p2000.L
I1124 10:08:05.510653 1866953 out.go:179]   - Message Size: 262144
I1124 10:08:05.515422 1866953 out.go:179]   - Options:      map[]
I1124 10:08:05.519227 1866953 out.go:179]   - Bind Address: 192.168.49.1:33491
I1124 10:08:05.522370 1866953 out.go:179] * Userspace file server: 
I1124 10:08:05.522667 1866953 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1124 10:08:05.522749 1866953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
I1124 10:08:05.542991 1866953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
I1124 10:08:05.652347 1866953 mount.go:180] unmount for /mount-9p ran successfully
I1124 10:08:05.652378 1866953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1124 10:08:05.661186 1866953 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=33491,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1124 10:08:05.672230 1866953 main.go:127] stdlog: ufs.go:141 connected
I1124 10:08:05.672396 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tversion tag 65535 msize 262144 version '9P2000.L'
I1124 10:08:05.672437 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rversion tag 65535 msize 262144 version '9P2000'
I1124 10:08:05.672670 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1124 10:08:05.672735 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rattach tag 0 aqid (15c44df b555b8d0 'd')
I1124 10:08:05.673543 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1124 10:08:05.673615 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c44df b555b8d0 'd') m d775 at 0 mt 1763978885 l 4096 t 0 d 0 ext )
I1124 10:08:05.683371 1866953 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/.mount-process: {Name:mkbd77400e2008f642b31aaffc01123493e98480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1124 10:08:05.683598 1866953 mount.go:105] mount successful: ""
I1124 10:08:05.686932 1866953 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3453839923/001 to /mount-9p
I1124 10:08:05.689880 1866953 out.go:203] 
I1124 10:08:05.694322 1866953 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1124 10:08:05.973852 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1124 10:08:05.973933 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c44df b555b8d0 'd') m d775 at 0 mt 1763978885 l 4096 t 0 d 0 ext )
I1124 10:08:05.974288 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 1 
I1124 10:08:05.974358 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 
I1124 10:08:05.974518 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Topen tag 0 fid 1 mode 0
I1124 10:08:05.974605 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Ropen tag 0 qid (15c44df b555b8d0 'd') iounit 0
I1124 10:08:05.974750 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1124 10:08:05.974800 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c44df b555b8d0 'd') m d775 at 0 mt 1763978885 l 4096 t 0 d 0 ext )
I1124 10:08:05.974973 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 0 count 262120
I1124 10:08:05.975090 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 258
I1124 10:08:05.975231 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 261862
I1124 10:08:05.975258 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1124 10:08:05.975375 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 262120
I1124 10:08:05.975403 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1124 10:08:05.975543 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1124 10:08:05.975576 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (15c44e0 b555b8d0 '') 
I1124 10:08:05.975686 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:05.975743 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c44e0 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:05.975878 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:05.975938 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c44e0 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:05.976057 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1124 10:08:05.976089 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:05.976232 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'test-1763978885329930944' 
I1124 10:08:05.976273 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (15c44e2 b555b8d0 '') 
I1124 10:08:05.976398 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:05.976430 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1763978885329930944' 'jenkins' 'jenkins' '' q (15c44e2 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:05.976549 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:05.976589 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1763978885329930944' 'jenkins' 'jenkins' '' q (15c44e2 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:05.976720 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1124 10:08:05.976748 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:05.976877 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1124 10:08:05.976913 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (15c44e1 b555b8d0 '') 
I1124 10:08:05.977124 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:05.977188 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c44e1 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:05.977325 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:05.977359 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c44e1 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:05.977503 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1124 10:08:05.977541 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:05.977666 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 262120
I1124 10:08:05.977694 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1124 10:08:05.977869 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 1
I1124 10:08:05.977911 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:06.265138 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 1 0:'test-1763978885329930944' 
I1124 10:08:06.265226 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (15c44e2 b555b8d0 '') 
I1124 10:08:06.265395 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 1
I1124 10:08:06.265443 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1763978885329930944' 'jenkins' 'jenkins' '' q (15c44e2 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:06.265598 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 1 newfid 2 
I1124 10:08:06.265634 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 
I1124 10:08:06.265750 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Topen tag 0 fid 2 mode 0
I1124 10:08:06.265801 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Ropen tag 0 qid (15c44e2 b555b8d0 '') iounit 0
I1124 10:08:06.265931 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 1
I1124 10:08:06.265970 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1763978885329930944' 'jenkins' 'jenkins' '' q (15c44e2 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:06.266110 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 2 offset 0 count 262120
I1124 10:08:06.266157 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 24
I1124 10:08:06.266303 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 2 offset 24 count 262120
I1124 10:08:06.266329 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1124 10:08:06.266459 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 2 offset 24 count 262120
I1124 10:08:06.266492 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1124 10:08:06.266649 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1124 10:08:06.266697 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:06.266913 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 1
I1124 10:08:06.266946 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:06.616151 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1124 10:08:06.616235 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c44df b555b8d0 'd') m d775 at 0 mt 1763978885 l 4096 t 0 d 0 ext )
I1124 10:08:06.616598 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 1 
I1124 10:08:06.616635 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 
I1124 10:08:06.616768 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Topen tag 0 fid 1 mode 0
I1124 10:08:06.616819 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Ropen tag 0 qid (15c44df b555b8d0 'd') iounit 0
I1124 10:08:06.616973 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1124 10:08:06.617028 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c44df b555b8d0 'd') m d775 at 0 mt 1763978885 l 4096 t 0 d 0 ext )
I1124 10:08:06.617194 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 0 count 262120
I1124 10:08:06.617330 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 258
I1124 10:08:06.617463 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 261862
I1124 10:08:06.617504 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1124 10:08:06.617640 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 262120
I1124 10:08:06.617667 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1124 10:08:06.617803 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1124 10:08:06.617841 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (15c44e0 b555b8d0 '') 
I1124 10:08:06.617983 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:06.618029 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c44e0 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:06.618174 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:06.618211 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c44e0 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:06.618343 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1124 10:08:06.618368 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:06.618499 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'test-1763978885329930944' 
I1124 10:08:06.618533 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (15c44e2 b555b8d0 '') 
I1124 10:08:06.618670 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:06.618707 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1763978885329930944' 'jenkins' 'jenkins' '' q (15c44e2 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:06.618825 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:06.618858 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1763978885329930944' 'jenkins' 'jenkins' '' q (15c44e2 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:06.618994 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1124 10:08:06.619021 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:06.619168 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1124 10:08:06.619207 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (15c44e1 b555b8d0 '') 
I1124 10:08:06.619349 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:06.619388 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c44e1 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:06.619522 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1124 10:08:06.619567 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c44e1 b555b8d0 '') m 644 at 0 mt 1763978885 l 24 t 0 d 0 ext )
I1124 10:08:06.619698 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1124 10:08:06.619721 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:06.619847 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 262120
I1124 10:08:06.619877 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1124 10:08:06.620017 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 1
I1124 10:08:06.620048 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:06.621354 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1124 10:08:06.621432 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rerror tag 0 ename 'file not found' ecode 0
I1124 10:08:06.892233 1866953 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 0
I1124 10:08:06.892286 1866953 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1124 10:08:06.893374 1866953 main.go:127] stdlog: ufs.go:147 disconnected
I1124 10:08:06.916887 1866953 out.go:179] * Unmounting /mount-9p ...
I1124 10:08:06.919957 1866953 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1124 10:08:06.928751 1866953 mount.go:180] unmount for /mount-9p ran successfully
I1124 10:08:06.928898 1866953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/.mount-process: {Name:mkbd77400e2008f642b31aaffc01123493e98480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1124 10:08:06.932152 1866953 out.go:203] 
W1124 10:08:06.935440 1866953 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1124 10:08:06.938321 1866953 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (1.69s)
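Editor's note: the mount sequence in the log above (force-unmount, mkdir, 9p mount against the userspace file server) can be replayed by hand when debugging a failed MountCmd run. A minimal sketch, assuming the bind address and port recorded in this particular log (192.168.49.1:33491 — the port is ephemeral and changes every run); it prints the commands instead of executing them, since the 9p server only exists while `minikube mount` is alive. Variable names are illustrative, not minikube's.

```shell
#!/bin/sh
# Rebuild the guest-side 9p mount commands minikube ran above.
# SERVER/PORT are taken from this log; they differ on every run.
MOUNT_POINT=/mount-9p
SERVER=192.168.49.1
PORT=33491
MSIZE=262144

UMOUNT_CMD="sudo umount -f -l $MOUNT_POINT"
MKDIR_CMD="sudo mkdir -p $MOUNT_POINT"
MOUNT_CMD="sudo mount -t 9p -o msize=$MSIZE,port=$PORT,trans=tcp,version=9p2000.L $SERVER $MOUNT_POINT"

# Print instead of executing: the userspace server is gone once the test exits.
echo "$UMOUNT_CMD"
echo "$MKDIR_CMD"
echo "$MOUNT_CMD"
```

The real invocation in the log additionally passes `dfltuid`/`dfltgid` options derived from the `docker` user and group inside the guest, as seen at the `sudo mount -t 9p` line above.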
TestJSONOutput/pause/Command (1.7s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-605923 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-605923 --output=json --user=testUser: exit status 80 (1.69547093s)
-- stdout --
	{"specversion":"1.0","id":"60eb2331-08f9-4431-b616-e18c1ba762a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-605923 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"13296824-e1b4-4fb8-8916-1f35ee50d9e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T10:22:33Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"02d12cf3-2356-4fd0-8287-fe0bef871e32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-605923 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.70s)
TestJSONOutput/unpause/Command (1.83s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-605923 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-605923 --output=json --user=testUser: exit status 80 (1.831712348s)
-- stdout --
	{"specversion":"1.0","id":"faa7aa12-b18c-4e02-b392-ac9a1b5e50aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-605923 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"cbf15c31-5341-470a-853f-a6b36936c769","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T10:22:35Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"f519f706-7a09-48c4-9c18-035ba4f5d69c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-605923 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.83s)
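Editor's note: the pause and unpause failures above share one root cause — `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`. `/run/runc` is runc's default state directory (its `--root`); when no container state exists there, `runc list` fails outright instead of returning an empty list. A hedged pre-check sketch, under the assumption that the runtime uses the default root (CRI-O can be configured to use a different one):

```shell
#!/bin/sh
# Guard for the failure mode above: check runc's state directory before
# calling `runc list`. STATE_DIR assumes runc's default root; adjust if the
# runtime is configured with a different runtime_root.
STATE_DIR=${STATE_DIR:-/run/runc}

if [ -d "$STATE_DIR" ]; then
  RESULT="present"
else
  RESULT="missing"   # `runc list` would fail with ENOENT, as in the log
fi
echo "runc state dir $STATE_DIR: $RESULT"
```

This only diagnoses the state; whether minikube's pause path should tolerate a missing directory is a separate question for the test/product code.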
TestKubernetesUpgrade (802.35s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.450710797s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-306449
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-306449: (1.419099256s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-306449 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-306449 status --format={{.Host}}: exit status 7 (71.619339ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m40.422763391s)
-- stdout --
	* [kubernetes-upgrade-306449] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-306449" primary control-plane node in "kubernetes-upgrade-306449" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	
-- /stdout --
** stderr ** 
	I1124 10:41:46.889734 1986432 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:41:46.889863 1986432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:41:46.889878 1986432 out.go:374] Setting ErrFile to fd 2...
	I1124 10:41:46.889883 1986432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:41:46.890126 1986432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:41:46.890508 1986432 out.go:368] Setting JSON to false
	I1124 10:41:46.891376 1986432 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":33857,"bootTime":1763947050,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 10:41:46.891446 1986432 start.go:143] virtualization:  
	I1124 10:41:46.894228 1986432 out.go:179] * [kubernetes-upgrade-306449] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 10:41:46.898115 1986432 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 10:41:46.898265 1986432 notify.go:221] Checking for updates...
	I1124 10:41:46.903760 1986432 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 10:41:46.906574 1986432 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:41:46.909501 1986432 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 10:41:46.912298 1986432 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 10:41:46.915035 1986432 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 10:41:46.918414 1986432 config.go:182] Loaded profile config "kubernetes-upgrade-306449": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 10:41:46.919044 1986432 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 10:41:46.946707 1986432 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 10:41:46.946824 1986432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:41:47.001920 1986432 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-24 10:41:46.991909727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:41:47.002060 1986432 docker.go:319] overlay module found
	I1124 10:41:47.006503 1986432 out.go:179] * Using the docker driver based on existing profile
	I1124 10:41:47.009486 1986432 start.go:309] selected driver: docker
	I1124 10:41:47.009512 1986432 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-306449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-306449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:41:47.009621 1986432 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 10:41:47.010349 1986432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:41:47.066874 1986432 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-24 10:41:47.058058448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:41:47.067220 1986432 cni.go:84] Creating CNI manager for ""
	I1124 10:41:47.067295 1986432 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:41:47.067341 1986432 start.go:353] cluster config:
	{Name:kubernetes-upgrade-306449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-306449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:41:47.070556 1986432 out.go:179] * Starting "kubernetes-upgrade-306449" primary control-plane node in "kubernetes-upgrade-306449" cluster
	I1124 10:41:47.073347 1986432 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 10:41:47.076200 1986432 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 10:41:47.079106 1986432 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 10:41:47.079178 1986432 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 10:41:47.098796 1986432 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 10:41:47.098818 1986432 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 10:41:47.139134 1986432 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1124 10:41:47.303311 1986432 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1124 10:41:47.303520 1986432 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/config.json ...
	I1124 10:41:47.303613 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 10:41:47.303804 1986432 cache.go:243] Successfully downloaded all kic artifacts
	I1124 10:41:47.303857 1986432 start.go:360] acquireMachinesLock for kubernetes-upgrade-306449: {Name:mke8b566d28d631e0ad726a5f23c1ec00a529b57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:47.303926 1986432 start.go:364] duration metric: took 30.95µs to acquireMachinesLock for "kubernetes-upgrade-306449"
	I1124 10:41:47.303958 1986432 start.go:96] Skipping create...Using existing machine configuration
	I1124 10:41:47.303976 1986432 fix.go:54] fixHost starting: 
	I1124 10:41:47.304268 1986432 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-306449 --format={{.State.Status}}
	I1124 10:41:47.321296 1986432 fix.go:112] recreateIfNeeded on kubernetes-upgrade-306449: state=Stopped err=<nil>
	W1124 10:41:47.321329 1986432 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 10:41:47.324525 1986432 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-306449" ...
	I1124 10:41:47.324613 1986432 cli_runner.go:164] Run: docker start kubernetes-upgrade-306449
	I1124 10:41:47.616971 1986432 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-306449 --format={{.State.Status}}
	I1124 10:41:47.655375 1986432 kic.go:430] container "kubernetes-upgrade-306449" state is running.
	I1124 10:41:47.658070 1986432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-306449
	I1124 10:41:47.692363 1986432 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/config.json ...
	I1124 10:41:47.692627 1986432 machine.go:94] provisionDockerMachine start ...
	I1124 10:41:47.692722 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:47.725268 1986432 main.go:143] libmachine: Using SSH client type: native
	I1124 10:41:47.725620 1986432 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35230 <nil> <nil>}
	I1124 10:41:47.725630 1986432 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 10:41:47.726694 1986432 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 10:41:47.759425 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 10:41:47.960460 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 10:41:48.143621 1986432 cache.go:107] acquiring lock: {Name:mk51c6509d867afa1860460e7f818b0fd6c6ffc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:48.143746 1986432 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 10:41:48.143762 1986432 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 158.937µs
	I1124 10:41:48.143771 1986432 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 10:41:48.143787 1986432 cache.go:107] acquiring lock: {Name:mkc3339989ad679c75da3535f339de2ab264c13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:48.143822 1986432 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 10:41:48.143831 1986432 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 47.041µs
	I1124 10:41:48.143839 1986432 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 10:41:48.143849 1986432 cache.go:107] acquiring lock: {Name:mk50cf3cddc2c196180538068faac25fc91cc6d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:48.143883 1986432 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 10:41:48.143894 1986432 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 46.114µs
	I1124 10:41:48.143900 1986432 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 10:41:48.143912 1986432 cache.go:107] acquiring lock: {Name:mk89b78abe6d458855fa20186ec8933dc572c637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:48.143960 1986432 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 10:41:48.143977 1986432 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 61.745µs
	I1124 10:41:48.143984 1986432 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 10:41:48.143994 1986432 cache.go:107] acquiring lock: {Name:mk98a86e7676175e816d9238de813bf7e0a6830b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:48.144028 1986432 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 10:41:48.144040 1986432 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 46.327µs
	I1124 10:41:48.144046 1986432 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 10:41:48.144062 1986432 cache.go:107] acquiring lock: {Name:mk304e06012edc32b22f97fa9d23c59634087187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:48.144093 1986432 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 10:41:48.144103 1986432 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.65µs
	I1124 10:41:48.144109 1986432 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 10:41:48.144123 1986432 cache.go:107] acquiring lock: {Name:mkf1b1225277d6cf64aaef5e38f73b701e50ac5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:48.144155 1986432 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 10:41:48.144166 1986432 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0" took 44.678µs
	I1124 10:41:48.144172 1986432 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 10:41:48.144182 1986432 cache.go:107] acquiring lock: {Name:mk135248a36ecc47ba05e973285f4354a467493e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:41:48.144216 1986432 cache.go:115] /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 10:41:48.144226 1986432 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 46.023µs
	I1124 10:41:48.144245 1986432 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 10:41:48.144254 1986432 cache.go:87] Successfully saved all images to host disk.
	I1124 10:41:50.912991 1986432 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-306449
	
	I1124 10:41:50.913014 1986432 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-306449"
	I1124 10:41:50.913090 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:50.940319 1986432 main.go:143] libmachine: Using SSH client type: native
	I1124 10:41:50.940641 1986432 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35230 <nil> <nil>}
	I1124 10:41:50.940652 1986432 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-306449 && echo "kubernetes-upgrade-306449" | sudo tee /etc/hostname
	I1124 10:41:51.113794 1986432 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-306449
	
	I1124 10:41:51.113872 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:51.135888 1986432 main.go:143] libmachine: Using SSH client type: native
	I1124 10:41:51.136214 1986432 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35230 <nil> <nil>}
	I1124 10:41:51.136230 1986432 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-306449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-306449/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-306449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 10:41:51.301937 1986432 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 10:41:51.301965 1986432 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 10:41:51.301994 1986432 ubuntu.go:190] setting up certificates
	I1124 10:41:51.302003 1986432 provision.go:84] configureAuth start
	I1124 10:41:51.302077 1986432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-306449
	I1124 10:41:51.319520 1986432 provision.go:143] copyHostCerts
	I1124 10:41:51.319606 1986432 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 10:41:51.319622 1986432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 10:41:51.319685 1986432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 10:41:51.319845 1986432 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 10:41:51.319851 1986432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 10:41:51.319888 1986432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 10:41:51.319969 1986432 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 10:41:51.319974 1986432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 10:41:51.319997 1986432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 10:41:51.320056 1986432 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-306449 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-306449 localhost minikube]
	I1124 10:41:51.775414 1986432 provision.go:177] copyRemoteCerts
	I1124 10:41:51.775494 1986432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 10:41:51.775543 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:51.799657 1986432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35230 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/kubernetes-upgrade-306449/id_rsa Username:docker}
	I1124 10:41:51.917966 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 10:41:51.938680 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1124 10:41:51.959348 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 10:41:51.979523 1986432 provision.go:87] duration metric: took 677.498623ms to configureAuth
	I1124 10:41:51.979550 1986432 ubuntu.go:206] setting minikube options for container-runtime
	I1124 10:41:51.979742 1986432 config.go:182] Loaded profile config "kubernetes-upgrade-306449": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 10:41:51.979847 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:51.997609 1986432 main.go:143] libmachine: Using SSH client type: native
	I1124 10:41:51.997936 1986432 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35230 <nil> <nil>}
	I1124 10:41:51.997957 1986432 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 10:41:52.351575 1986432 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 10:41:52.351649 1986432 machine.go:97] duration metric: took 4.659007717s to provisionDockerMachine
	I1124 10:41:52.351678 1986432 start.go:293] postStartSetup for "kubernetes-upgrade-306449" (driver="docker")
	I1124 10:41:52.351723 1986432 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 10:41:52.351824 1986432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 10:41:52.351899 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:52.380071 1986432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35230 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/kubernetes-upgrade-306449/id_rsa Username:docker}
	I1124 10:41:52.490321 1986432 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 10:41:52.494567 1986432 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 10:41:52.494593 1986432 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 10:41:52.494605 1986432 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 10:41:52.494660 1986432 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 10:41:52.494740 1986432 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 10:41:52.494841 1986432 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 10:41:52.503417 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 10:41:52.523522 1986432 start.go:296] duration metric: took 171.8161ms for postStartSetup
	I1124 10:41:52.523621 1986432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:41:52.523668 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:52.557813 1986432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35230 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/kubernetes-upgrade-306449/id_rsa Username:docker}
	I1124 10:41:52.673689 1986432 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 10:41:52.679419 1986432 fix.go:56] duration metric: took 5.37543576s for fixHost
	I1124 10:41:52.679441 1986432 start.go:83] releasing machines lock for "kubernetes-upgrade-306449", held for 5.375492154s
	I1124 10:41:52.679522 1986432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-306449
	I1124 10:41:52.705597 1986432 ssh_runner.go:195] Run: cat /version.json
	I1124 10:41:52.705658 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:52.705930 1986432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 10:41:52.705992 1986432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-306449
	I1124 10:41:52.735804 1986432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35230 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/kubernetes-upgrade-306449/id_rsa Username:docker}
	I1124 10:41:52.748454 1986432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35230 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/kubernetes-upgrade-306449/id_rsa Username:docker}
	I1124 10:41:52.849365 1986432 ssh_runner.go:195] Run: systemctl --version
	I1124 10:41:52.952184 1986432 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 10:41:52.994971 1986432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 10:41:53.000937 1986432 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 10:41:53.001024 1986432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 10:41:53.011934 1986432 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 10:41:53.011957 1986432 start.go:496] detecting cgroup driver to use...
	I1124 10:41:53.011991 1986432 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 10:41:53.012041 1986432 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 10:41:53.031892 1986432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 10:41:53.055329 1986432 docker.go:218] disabling cri-docker service (if available) ...
	I1124 10:41:53.055406 1986432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 10:41:53.073037 1986432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 10:41:53.089022 1986432 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 10:41:53.245496 1986432 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 10:41:53.404747 1986432 docker.go:234] disabling docker service ...
	I1124 10:41:53.404822 1986432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 10:41:53.425345 1986432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 10:41:53.447706 1986432 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 10:41:53.609706 1986432 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 10:41:53.789461 1986432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 10:41:53.807091 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 10:41:53.828840 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 10:41:54.010629 1986432 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 10:41:54.010736 1986432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:41:54.046157 1986432 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 10:41:54.046223 1986432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:41:54.081405 1986432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:41:54.091781 1986432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:41:54.102140 1986432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 10:41:54.122216 1986432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:41:54.134672 1986432 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:41:54.147016 1986432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:41:54.157606 1986432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 10:41:54.165652 1986432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 10:41:54.173180 1986432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:41:54.293672 1986432 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 10:41:54.489483 1986432 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 10:41:54.489564 1986432 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 10:41:54.493571 1986432 start.go:564] Will wait 60s for crictl version
	I1124 10:41:54.493651 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:54.497203 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 10:41:54.523094 1986432 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 10:41:54.523250 1986432 ssh_runner.go:195] Run: crio --version
	I1124 10:41:54.552821 1986432 ssh_runner.go:195] Run: crio --version
	I1124 10:41:54.588910 1986432 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1124 10:41:54.591791 1986432 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-306449 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 10:41:54.611570 1986432 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 10:41:54.616866 1986432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 10:41:54.627886 1986432 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-306449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-306449 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 10:41:54.628067 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 10:41:54.787217 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 10:41:54.963923 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 10:41:55.120724 1986432 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1124 10:41:55.120806 1986432 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 10:41:55.168036 1986432 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 10:41:55.168064 1986432 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 10:41:55.168119 1986432 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 10:41:55.168361 1986432 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 10:41:55.168523 1986432 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 10:41:55.168630 1986432 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 10:41:55.168826 1986432 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 10:41:55.168966 1986432 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 10:41:55.169094 1986432 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 10:41:55.169261 1986432 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 10:41:55.170006 1986432 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 10:41:55.170929 1986432 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 10:41:55.171172 1986432 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 10:41:55.171864 1986432 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 10:41:55.172053 1986432 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 10:41:55.172190 1986432 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 10:41:55.172333 1986432 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 10:41:55.174281 1986432 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 10:41:55.498678 1986432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 10:41:55.516387 1986432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1124 10:41:55.516705 1986432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 10:41:55.517202 1986432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 10:41:55.521139 1986432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 10:41:55.537341 1986432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.24-0
	I1124 10:41:55.537730 1986432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 10:41:55.730859 1986432 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1124 10:41:55.730927 1986432 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 10:41:55.730986 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:55.843521 1986432 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1124 10:41:55.843750 1986432 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 10:41:55.843854 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:55.860270 1986432 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1124 10:41:55.860305 1986432 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 10:41:55.860354 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:55.860424 1986432 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1124 10:41:55.860439 1986432 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 10:41:55.860461 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:55.860521 1986432 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1124 10:41:55.860535 1986432 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 10:41:55.860557 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:55.860620 1986432 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1124 10:41:55.860633 1986432 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 10:41:55.860652 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:55.860707 1986432 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca" in container runtime
	I1124 10:41:55.860719 1986432 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 10:41:55.860742 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:55.860810 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 10:41:55.860861 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 10:41:55.885817 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 10:41:55.885893 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 10:41:55.887808 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 10:41:55.889773 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 10:41:56.067439 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 10:41:56.067547 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 10:41:56.067630 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 10:41:56.067687 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 10:41:56.067759 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 10:41:56.067823 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 10:41:56.067978 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 10:41:56.254309 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 10:41:56.254387 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 10:41:56.254437 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 10:41:56.254527 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 10:41:56.254578 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 10:41:56.254727 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 10:41:56.275452 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 10:41:56.459732 1986432 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 10:41:56.459849 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 10:41:56.459889 1986432 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 10:41:56.459908 1986432 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1124 10:41:56.459954 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 10:41:56.459977 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 10:41:56.459998 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 10:41:56.460043 1986432 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 10:41:56.460046 1986432 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 10:41:56.460081 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 10:41:56.460093 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 10:41:56.461635 1986432 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 10:41:56.461760 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 10:41:56.487545 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 10:41:56.487588 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1124 10:41:56.492369 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 10:41:56.492459 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	W1124 10:41:56.522157 1986432 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1124 10:41:56.522344 1986432 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 10:41:56.575621 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 10:41:56.575665 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1124 10:41:56.575853 1986432 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0
	I1124 10:41:56.575930 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 10:41:56.575977 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 10:41:56.575990 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1124 10:41:56.576045 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 10:41:56.576053 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1124 10:41:56.576114 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 10:41:56.576126 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1124 10:41:56.704538 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 10:41:56.704618 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (21895168 bytes)
	I1124 10:41:56.704697 1986432 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1124 10:41:56.704749 1986432 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 10:41:56.704810 1986432 ssh_runner.go:195] Run: which crictl
	I1124 10:41:56.729442 1986432 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 10:41:56.729559 1986432 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1124 10:41:56.812156 1986432 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 10:41:57.363542 1986432 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1124 10:41:57.363619 1986432 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 10:41:57.363688 1986432 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 10:41:57.363781 1986432 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 10:41:57.363875 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 10:41:59.727133 1986432 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.363402546s)
	I1124 10:41:59.727159 1986432 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 10:41:59.727177 1986432 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 10:41:59.727242 1986432 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1124 10:41:59.727314 1986432 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.363408233s)
	I1124 10:41:59.727331 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 10:41:59.727345 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1124 10:42:01.923561 1986432 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (2.196298685s)
	I1124 10:42:01.923590 1986432 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 10:42:01.923608 1986432 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 10:42:01.923665 1986432 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 10:42:03.678906 1986432 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.755214516s)
	I1124 10:42:03.678932 1986432 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 10:42:03.678950 1986432 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 10:42:03.678998 1986432 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 10:42:06.077325 1986432 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (2.398306476s)
	I1124 10:42:06.077351 1986432 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 10:42:06.077373 1986432 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 10:42:06.077423 1986432 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 10:42:07.525220 1986432 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.447770545s)
	I1124 10:42:07.525248 1986432 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 10:42:07.525273 1986432 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 10:42:07.525324 1986432 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0
	I1124 10:42:10.457001 1986432 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.24-0: (2.931650631s)
	I1124 10:42:10.457026 1986432 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 10:42:10.457045 1986432 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 10:42:10.457093 1986432 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 10:42:11.154162 1986432 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 10:42:11.154197 1986432 cache_images.go:125] Successfully loaded all cached images
	I1124 10:42:11.154203 1986432 cache_images.go:94] duration metric: took 15.986124068s to LoadCachedImages
	I1124 10:42:11.154215 1986432 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1124 10:42:11.154331 1986432 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-306449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-306449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 10:42:11.154419 1986432 ssh_runner.go:195] Run: crio config
	I1124 10:42:11.276715 1986432 cni.go:84] Creating CNI manager for ""
	I1124 10:42:11.276782 1986432 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:42:11.276816 1986432 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 10:42:11.276870 1986432 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-306449 NodeName:kubernetes-upgrade-306449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 10:42:11.277030 1986432 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-306449"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 10:42:11.277146 1986432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 10:42:11.287410 1986432 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 10:42:11.287519 1986432 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 10:42:11.297389 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1124 10:42:11.297542 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 10:42:11.297672 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1124 10:42:11.297727 1986432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:42:11.297848 1986432 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1124 10:42:11.297926 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 10:42:11.331066 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 10:42:11.331344 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1124 10:42:11.331159 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 10:42:11.331472 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1124 10:42:11.331323 1986432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 10:42:11.361162 1986432 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 10:42:11.361242 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1124 10:42:12.321727 1986432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 10:42:12.331999 1986432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1124 10:42:12.348892 1986432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 10:42:12.365008 1986432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1124 10:42:12.380843 1986432 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 10:42:12.386333 1986432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 10:42:12.401912 1986432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:42:12.534800 1986432 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 10:42:12.553986 1986432 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449 for IP: 192.168.85.2
	I1124 10:42:12.554010 1986432 certs.go:195] generating shared ca certs ...
	I1124 10:42:12.554033 1986432 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:42:12.554249 1986432 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 10:42:12.554324 1986432 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 10:42:12.554340 1986432 certs.go:257] generating profile certs ...
	I1124 10:42:12.554470 1986432 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/client.key
	I1124 10:42:12.554571 1986432 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/apiserver.key.5a2f218f
	I1124 10:42:12.554655 1986432 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/proxy-client.key
	I1124 10:42:12.554849 1986432 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 10:42:12.554916 1986432 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 10:42:12.554939 1986432 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 10:42:12.554998 1986432 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 10:42:12.555050 1986432 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 10:42:12.555082 1986432 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 10:42:12.555152 1986432 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 10:42:12.556186 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 10:42:12.581942 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 10:42:12.606654 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 10:42:12.632323 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 10:42:12.654752 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1124 10:42:12.674318 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 10:42:12.697556 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 10:42:12.718360 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 10:42:12.740181 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 10:42:12.764326 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 10:42:12.784588 1986432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 10:42:12.804821 1986432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 10:42:12.820733 1986432 ssh_runner.go:195] Run: openssl version
	I1124 10:42:12.831619 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 10:42:12.841798 1986432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 10:42:12.847742 1986432 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 10:42:12.847893 1986432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 10:42:12.896361 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 10:42:12.906381 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 10:42:12.916670 1986432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:42:12.921840 1986432 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:42:12.921946 1986432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:42:12.968651 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 10:42:12.978533 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 10:42:12.988764 1986432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 10:42:12.993851 1986432 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 10:42:12.993959 1986432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 10:42:13.038350 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 10:42:13.047320 1986432 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 10:42:13.052315 1986432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 10:42:13.097154 1986432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 10:42:13.141289 1986432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 10:42:13.185460 1986432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 10:42:13.234066 1986432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 10:42:13.283163 1986432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 10:42:13.336191 1986432 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-306449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-306449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:42:13.336288 1986432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 10:42:13.336361 1986432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 10:42:13.377528 1986432 cri.go:89] found id: ""
	I1124 10:42:13.377597 1986432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 10:42:13.387443 1986432 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 10:42:13.387460 1986432 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 10:42:13.387516 1986432 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 10:42:13.397088 1986432 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 10:42:13.397611 1986432 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-306449" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:42:13.397713 1986432 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-1804834/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-306449" cluster setting kubeconfig missing "kubernetes-upgrade-306449" context setting]
	I1124 10:42:13.397983 1986432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:42:13.398509 1986432 kapi.go:59] client config for kubernetes-upgrade-306449: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/kubernetes-upgrade-306449/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 10:42:13.398995 1986432 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 10:42:13.399039 1986432 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 10:42:13.399044 1986432 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 10:42:13.399048 1986432 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 10:42:13.399052 1986432 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 10:42:13.399474 1986432 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 10:42:13.415058 1986432 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 10:41:25.321935109 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 10:42:12.374800026 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-306449"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1124 10:42:13.415081 1986432 kubeadm.go:1161] stopping kube-system containers ...
	I1124 10:42:13.415094 1986432 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 10:42:13.415153 1986432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 10:42:13.465615 1986432 cri.go:89] found id: ""
	I1124 10:42:13.465720 1986432 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 10:42:13.504807 1986432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:42:13.515110 1986432 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Nov 24 10:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov 24 10:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Nov 24 10:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov 24 10:41 /etc/kubernetes/scheduler.conf
	
	I1124 10:42:13.515187 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 10:42:13.526278 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 10:42:13.537087 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 10:42:13.549297 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 10:42:13.549384 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:42:13.560215 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 10:42:13.572390 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 10:42:13.572477 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:42:13.581581 1986432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 10:42:13.590979 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 10:42:13.658983 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 10:42:14.771478 1986432 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.112457181s)
	I1124 10:42:14.771582 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 10:42:15.031204 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 10:42:15.135852 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 10:42:15.213279 1986432 api_server.go:52] waiting for apiserver process to appear ...
	I1124 10:42:15.213393 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:15.713830 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:16.214141 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:16.713545 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:17.214077 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:17.713567 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:18.214293 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:18.713553 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:19.213567 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:19.714296 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:20.213529 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:20.714258 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:21.214246 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:21.714017 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:22.214202 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:22.714338 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:23.213501 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:23.713966 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:24.213501 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:24.714107 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:25.213632 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:25.714200 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:26.214395 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:26.713559 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:27.213497 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:27.714199 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:28.213470 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:28.714263 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:29.213491 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:29.713497 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:30.213458 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:30.713461 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:31.213467 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:31.713501 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:32.213924 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:32.713522 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:33.213496 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:33.714302 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:34.213566 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:34.714378 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:35.214302 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:35.713714 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:36.214288 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:36.714025 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:37.217733 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:37.714302 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:38.214314 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:38.714285 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:39.214149 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:39.714386 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:40.214588 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:40.714038 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:41.213507 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:41.713637 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:42.214115 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:42.714447 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:43.213728 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:43.714319 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:44.214323 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:44.714227 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:45.214495 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:45.713830 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:46.214259 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:46.713528 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:47.213515 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:47.713545 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:48.214344 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:48.714210 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:49.213575 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:49.714213 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:50.213542 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:50.713912 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:51.213470 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:51.714244 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:52.213584 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:52.713501 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:53.214319 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:53.714056 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:54.214294 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:54.714281 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:55.214296 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:55.714339 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:56.214355 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:56.714240 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:57.213902 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:57.713719 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:58.214221 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:58.714360 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:59.214288 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:42:59.713949 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:00.215306 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:00.713527 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:01.214244 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:01.713682 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:02.214532 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:02.717276 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:03.214369 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:03.714172 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:04.214326 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:04.713525 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:05.213538 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:05.714287 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:06.214307 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:06.713515 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:07.213683 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:07.714089 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:08.214197 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:08.714318 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:09.214062 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:09.714225 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:10.213515 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:10.714307 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:11.214243 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:11.715985 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:12.214324 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:12.714266 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:13.214352 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:13.714426 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:14.213532 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:14.714349 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:15.214062 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:15.214140 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:15.252435 1986432 cri.go:89] found id: ""
	I1124 10:43:15.252469 1986432 logs.go:282] 0 containers: []
	W1124 10:43:15.252478 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:15.252486 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:15.252546 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:15.288761 1986432 cri.go:89] found id: ""
	I1124 10:43:15.288785 1986432 logs.go:282] 0 containers: []
	W1124 10:43:15.288794 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:15.288800 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:15.288858 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:15.321827 1986432 cri.go:89] found id: ""
	I1124 10:43:15.321852 1986432 logs.go:282] 0 containers: []
	W1124 10:43:15.321861 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:15.321868 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:15.321924 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:15.355297 1986432 cri.go:89] found id: ""
	I1124 10:43:15.355322 1986432 logs.go:282] 0 containers: []
	W1124 10:43:15.355331 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:15.355338 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:15.355399 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:15.395116 1986432 cri.go:89] found id: ""
	I1124 10:43:15.395142 1986432 logs.go:282] 0 containers: []
	W1124 10:43:15.395151 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:15.395158 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:15.395220 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:15.424940 1986432 cri.go:89] found id: ""
	I1124 10:43:15.424967 1986432 logs.go:282] 0 containers: []
	W1124 10:43:15.424976 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:15.424983 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:15.425041 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:15.464254 1986432 cri.go:89] found id: ""
	I1124 10:43:15.464280 1986432 logs.go:282] 0 containers: []
	W1124 10:43:15.464289 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:15.464299 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:15.464355 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:15.549784 1986432 cri.go:89] found id: ""
	I1124 10:43:15.549809 1986432 logs.go:282] 0 containers: []
	W1124 10:43:15.549818 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:15.549826 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:15.549838 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:15.633152 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:15.633239 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:15.653870 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:15.654022 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:15.756887 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:15.756959 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:15.756987 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:15.802820 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:15.802900 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:18.346421 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:18.357664 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:18.357734 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:18.384953 1986432 cri.go:89] found id: ""
	I1124 10:43:18.384974 1986432 logs.go:282] 0 containers: []
	W1124 10:43:18.384983 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:18.384989 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:18.385046 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:18.413153 1986432 cri.go:89] found id: ""
	I1124 10:43:18.413174 1986432 logs.go:282] 0 containers: []
	W1124 10:43:18.413184 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:18.413190 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:18.413249 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:18.447339 1986432 cri.go:89] found id: ""
	I1124 10:43:18.447361 1986432 logs.go:282] 0 containers: []
	W1124 10:43:18.447369 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:18.447375 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:18.447434 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:18.503433 1986432 cri.go:89] found id: ""
	I1124 10:43:18.503455 1986432 logs.go:282] 0 containers: []
	W1124 10:43:18.503464 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:18.503470 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:18.503532 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:18.563537 1986432 cri.go:89] found id: ""
	I1124 10:43:18.563560 1986432 logs.go:282] 0 containers: []
	W1124 10:43:18.563570 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:18.563576 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:18.563646 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:18.597904 1986432 cri.go:89] found id: ""
	I1124 10:43:18.597924 1986432 logs.go:282] 0 containers: []
	W1124 10:43:18.597933 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:18.597942 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:18.598000 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:18.630167 1986432 cri.go:89] found id: ""
	I1124 10:43:18.630189 1986432 logs.go:282] 0 containers: []
	W1124 10:43:18.630197 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:18.630203 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:18.630261 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:18.663332 1986432 cri.go:89] found id: ""
	I1124 10:43:18.663353 1986432 logs.go:282] 0 containers: []
	W1124 10:43:18.663362 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:18.663406 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:18.663419 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:18.698272 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:18.698350 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:18.774564 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:18.774648 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:18.791690 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:18.791717 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:18.876872 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:18.876941 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:18.876984 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:21.425227 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:21.437472 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:21.437538 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:21.487785 1986432 cri.go:89] found id: ""
	I1124 10:43:21.487806 1986432 logs.go:282] 0 containers: []
	W1124 10:43:21.487815 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:21.487822 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:21.487880 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:21.535678 1986432 cri.go:89] found id: ""
	I1124 10:43:21.535704 1986432 logs.go:282] 0 containers: []
	W1124 10:43:21.535713 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:21.535720 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:21.535783 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:21.578369 1986432 cri.go:89] found id: ""
	I1124 10:43:21.578395 1986432 logs.go:282] 0 containers: []
	W1124 10:43:21.578405 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:21.578411 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:21.578472 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:21.624891 1986432 cri.go:89] found id: ""
	I1124 10:43:21.624915 1986432 logs.go:282] 0 containers: []
	W1124 10:43:21.624923 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:21.624930 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:21.624987 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:21.662105 1986432 cri.go:89] found id: ""
	I1124 10:43:21.662129 1986432 logs.go:282] 0 containers: []
	W1124 10:43:21.662137 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:21.662144 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:21.662202 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:21.689280 1986432 cri.go:89] found id: ""
	I1124 10:43:21.689301 1986432 logs.go:282] 0 containers: []
	W1124 10:43:21.689309 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:21.689321 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:21.689378 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:21.723589 1986432 cri.go:89] found id: ""
	I1124 10:43:21.723611 1986432 logs.go:282] 0 containers: []
	W1124 10:43:21.723620 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:21.723627 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:21.723716 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:21.751630 1986432 cri.go:89] found id: ""
	I1124 10:43:21.751716 1986432 logs.go:282] 0 containers: []
	W1124 10:43:21.751739 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:21.751763 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:21.751799 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:21.825902 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:21.825986 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:21.849871 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:21.850035 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:21.953849 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:21.953936 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:21.953977 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:22.008213 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:22.008316 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:24.553271 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:24.565791 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:24.565862 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:24.614310 1986432 cri.go:89] found id: ""
	I1124 10:43:24.614333 1986432 logs.go:282] 0 containers: []
	W1124 10:43:24.614342 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:24.614349 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:24.614418 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:24.671321 1986432 cri.go:89] found id: ""
	I1124 10:43:24.671344 1986432 logs.go:282] 0 containers: []
	W1124 10:43:24.671352 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:24.671358 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:24.671417 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:24.730684 1986432 cri.go:89] found id: ""
	I1124 10:43:24.730705 1986432 logs.go:282] 0 containers: []
	W1124 10:43:24.730714 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:24.730721 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:24.730781 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:24.808971 1986432 cri.go:89] found id: ""
	I1124 10:43:24.808993 1986432 logs.go:282] 0 containers: []
	W1124 10:43:24.809002 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:24.809009 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:24.809071 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:24.868004 1986432 cri.go:89] found id: ""
	I1124 10:43:24.868027 1986432 logs.go:282] 0 containers: []
	W1124 10:43:24.868036 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:24.868043 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:24.868105 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:24.922526 1986432 cri.go:89] found id: ""
	I1124 10:43:24.922561 1986432 logs.go:282] 0 containers: []
	W1124 10:43:24.922570 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:24.922577 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:24.922636 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:24.972640 1986432 cri.go:89] found id: ""
	I1124 10:43:24.972662 1986432 logs.go:282] 0 containers: []
	W1124 10:43:24.972671 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:24.972679 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:24.972740 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:25.019032 1986432 cri.go:89] found id: ""
	I1124 10:43:25.019109 1986432 logs.go:282] 0 containers: []
	W1124 10:43:25.019132 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:25.019157 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:25.019197 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:25.068271 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:25.068351 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:25.141573 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:25.141599 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:25.246501 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:25.246590 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:25.276325 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:25.276401 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:25.406750 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:27.907031 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:27.918456 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:27.918534 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:27.984642 1986432 cri.go:89] found id: ""
	I1124 10:43:27.984670 1986432 logs.go:282] 0 containers: []
	W1124 10:43:27.984679 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:27.984692 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:27.984754 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:28.043825 1986432 cri.go:89] found id: ""
	I1124 10:43:28.043853 1986432 logs.go:282] 0 containers: []
	W1124 10:43:28.043862 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:28.043869 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:28.043930 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:28.091525 1986432 cri.go:89] found id: ""
	I1124 10:43:28.091553 1986432 logs.go:282] 0 containers: []
	W1124 10:43:28.091562 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:28.091568 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:28.091627 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:28.130468 1986432 cri.go:89] found id: ""
	I1124 10:43:28.130496 1986432 logs.go:282] 0 containers: []
	W1124 10:43:28.130505 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:28.130513 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:28.130573 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:28.170798 1986432 cri.go:89] found id: ""
	I1124 10:43:28.170825 1986432 logs.go:282] 0 containers: []
	W1124 10:43:28.170835 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:28.170841 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:28.170900 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:28.207449 1986432 cri.go:89] found id: ""
	I1124 10:43:28.207477 1986432 logs.go:282] 0 containers: []
	W1124 10:43:28.207486 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:28.207493 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:28.207553 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:28.247457 1986432 cri.go:89] found id: ""
	I1124 10:43:28.247483 1986432 logs.go:282] 0 containers: []
	W1124 10:43:28.247492 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:28.247499 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:28.247563 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:28.296216 1986432 cri.go:89] found id: ""
	I1124 10:43:28.296292 1986432 logs.go:282] 0 containers: []
	W1124 10:43:28.296316 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:28.296338 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:28.296372 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:28.341740 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:28.341820 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:28.383952 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:28.383978 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:28.482569 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:28.482658 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:28.501638 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:28.501720 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:28.627126 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:31.127405 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:31.140029 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:31.140103 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:31.168979 1986432 cri.go:89] found id: ""
	I1124 10:43:31.169016 1986432 logs.go:282] 0 containers: []
	W1124 10:43:31.169030 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:31.169040 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:31.169167 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:31.200768 1986432 cri.go:89] found id: ""
	I1124 10:43:31.200792 1986432 logs.go:282] 0 containers: []
	W1124 10:43:31.200801 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:31.200807 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:31.200868 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:31.227116 1986432 cri.go:89] found id: ""
	I1124 10:43:31.227141 1986432 logs.go:282] 0 containers: []
	W1124 10:43:31.227151 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:31.227158 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:31.227219 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:31.254551 1986432 cri.go:89] found id: ""
	I1124 10:43:31.254575 1986432 logs.go:282] 0 containers: []
	W1124 10:43:31.254584 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:31.254591 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:31.254652 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:31.280861 1986432 cri.go:89] found id: ""
	I1124 10:43:31.280885 1986432 logs.go:282] 0 containers: []
	W1124 10:43:31.280895 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:31.280904 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:31.280975 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:31.310041 1986432 cri.go:89] found id: ""
	I1124 10:43:31.310064 1986432 logs.go:282] 0 containers: []
	W1124 10:43:31.310073 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:31.310079 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:31.310158 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:31.343559 1986432 cri.go:89] found id: ""
	I1124 10:43:31.343581 1986432 logs.go:282] 0 containers: []
	W1124 10:43:31.343589 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:31.343596 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:31.343656 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:31.372743 1986432 cri.go:89] found id: ""
	I1124 10:43:31.372765 1986432 logs.go:282] 0 containers: []
	W1124 10:43:31.372773 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:31.372782 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:31.372794 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:31.443689 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:31.443726 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:31.461433 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:31.461464 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:31.538054 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:31.538087 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:31.538101 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:31.578510 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:31.578592 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:34.123832 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:34.135645 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:34.135745 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:34.179148 1986432 cri.go:89] found id: ""
	I1124 10:43:34.179176 1986432 logs.go:282] 0 containers: []
	W1124 10:43:34.179185 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:34.179192 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:34.179258 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:34.227399 1986432 cri.go:89] found id: ""
	I1124 10:43:34.227434 1986432 logs.go:282] 0 containers: []
	W1124 10:43:34.227443 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:34.227460 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:34.227552 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:34.276575 1986432 cri.go:89] found id: ""
	I1124 10:43:34.276602 1986432 logs.go:282] 0 containers: []
	W1124 10:43:34.276611 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:34.276618 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:34.276687 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:34.321189 1986432 cri.go:89] found id: ""
	I1124 10:43:34.321222 1986432 logs.go:282] 0 containers: []
	W1124 10:43:34.321238 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:34.321245 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:34.321364 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:34.355463 1986432 cri.go:89] found id: ""
	I1124 10:43:34.355496 1986432 logs.go:282] 0 containers: []
	W1124 10:43:34.355505 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:34.355516 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:34.355598 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:34.393166 1986432 cri.go:89] found id: ""
	I1124 10:43:34.393193 1986432 logs.go:282] 0 containers: []
	W1124 10:43:34.393208 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:34.393215 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:34.393284 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:34.449319 1986432 cri.go:89] found id: ""
	I1124 10:43:34.449358 1986432 logs.go:282] 0 containers: []
	W1124 10:43:34.449369 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:34.449375 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:34.449444 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:34.488044 1986432 cri.go:89] found id: ""
	I1124 10:43:34.488066 1986432 logs.go:282] 0 containers: []
	W1124 10:43:34.488074 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:34.488083 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:34.488096 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:34.508791 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:34.508879 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:34.649767 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:34.649842 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:34.649880 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:34.717134 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:34.717168 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:34.817496 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:34.817521 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:37.434524 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:37.447913 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:37.447986 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:37.482185 1986432 cri.go:89] found id: ""
	I1124 10:43:37.482209 1986432 logs.go:282] 0 containers: []
	W1124 10:43:37.482219 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:37.482227 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:37.482287 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:37.510335 1986432 cri.go:89] found id: ""
	I1124 10:43:37.510358 1986432 logs.go:282] 0 containers: []
	W1124 10:43:37.510367 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:37.510373 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:37.510436 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:37.543572 1986432 cri.go:89] found id: ""
	I1124 10:43:37.543592 1986432 logs.go:282] 0 containers: []
	W1124 10:43:37.543600 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:37.543606 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:37.543661 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:37.573540 1986432 cri.go:89] found id: ""
	I1124 10:43:37.573560 1986432 logs.go:282] 0 containers: []
	W1124 10:43:37.573569 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:37.573575 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:37.573636 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:37.607659 1986432 cri.go:89] found id: ""
	I1124 10:43:37.607683 1986432 logs.go:282] 0 containers: []
	W1124 10:43:37.607691 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:37.607698 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:37.607768 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:37.645777 1986432 cri.go:89] found id: ""
	I1124 10:43:37.645799 1986432 logs.go:282] 0 containers: []
	W1124 10:43:37.645808 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:37.645814 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:37.645879 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:37.687279 1986432 cri.go:89] found id: ""
	I1124 10:43:37.687301 1986432 logs.go:282] 0 containers: []
	W1124 10:43:37.687309 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:37.687316 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:37.687376 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:37.790481 1986432 cri.go:89] found id: ""
	I1124 10:43:37.790506 1986432 logs.go:282] 0 containers: []
	W1124 10:43:37.790516 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:37.790525 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:37.790538 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:37.936910 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:37.936995 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:37.968001 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:37.968072 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:38.106838 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:38.106897 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:38.106929 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:38.168230 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:38.168264 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:40.728925 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:40.747715 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:40.747799 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:40.828161 1986432 cri.go:89] found id: ""
	I1124 10:43:40.828183 1986432 logs.go:282] 0 containers: []
	W1124 10:43:40.828192 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:40.828198 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:40.828273 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:40.865732 1986432 cri.go:89] found id: ""
	I1124 10:43:40.865757 1986432 logs.go:282] 0 containers: []
	W1124 10:43:40.865766 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:40.865772 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:40.865830 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:40.904155 1986432 cri.go:89] found id: ""
	I1124 10:43:40.904182 1986432 logs.go:282] 0 containers: []
	W1124 10:43:40.904191 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:40.904197 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:40.904256 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:40.933714 1986432 cri.go:89] found id: ""
	I1124 10:43:40.933737 1986432 logs.go:282] 0 containers: []
	W1124 10:43:40.933745 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:40.933752 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:40.933809 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:40.977957 1986432 cri.go:89] found id: ""
	I1124 10:43:40.977984 1986432 logs.go:282] 0 containers: []
	W1124 10:43:40.977992 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:40.977998 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:40.978054 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:41.047427 1986432 cri.go:89] found id: ""
	I1124 10:43:41.047454 1986432 logs.go:282] 0 containers: []
	W1124 10:43:41.047464 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:41.047471 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:41.047581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:41.085891 1986432 cri.go:89] found id: ""
	I1124 10:43:41.085919 1986432 logs.go:282] 0 containers: []
	W1124 10:43:41.085928 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:41.085935 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:41.085995 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:41.122212 1986432 cri.go:89] found id: ""
	I1124 10:43:41.122234 1986432 logs.go:282] 0 containers: []
	W1124 10:43:41.122242 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:41.122251 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:41.122265 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:41.204347 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:41.204386 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:41.230455 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:41.230487 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:41.333793 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:41.333823 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:41.333837 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:41.385442 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:41.385497 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:43.926750 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:43.937238 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:43.937308 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:43.967672 1986432 cri.go:89] found id: ""
	I1124 10:43:43.967695 1986432 logs.go:282] 0 containers: []
	W1124 10:43:43.967705 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:43.967711 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:43.967780 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:43.996398 1986432 cri.go:89] found id: ""
	I1124 10:43:43.996421 1986432 logs.go:282] 0 containers: []
	W1124 10:43:43.996429 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:43.996435 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:43.996495 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:44.032106 1986432 cri.go:89] found id: ""
	I1124 10:43:44.032133 1986432 logs.go:282] 0 containers: []
	W1124 10:43:44.032142 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:44.032149 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:44.032208 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:44.063145 1986432 cri.go:89] found id: ""
	I1124 10:43:44.063170 1986432 logs.go:282] 0 containers: []
	W1124 10:43:44.063180 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:44.063188 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:44.063248 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:44.093446 1986432 cri.go:89] found id: ""
	I1124 10:43:44.093478 1986432 logs.go:282] 0 containers: []
	W1124 10:43:44.093489 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:44.093512 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:44.093589 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:44.119228 1986432 cri.go:89] found id: ""
	I1124 10:43:44.119251 1986432 logs.go:282] 0 containers: []
	W1124 10:43:44.119260 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:44.119267 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:44.119323 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:44.151719 1986432 cri.go:89] found id: ""
	I1124 10:43:44.151805 1986432 logs.go:282] 0 containers: []
	W1124 10:43:44.151830 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:44.151850 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:44.151937 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:44.190656 1986432 cri.go:89] found id: ""
	I1124 10:43:44.190728 1986432 logs.go:282] 0 containers: []
	W1124 10:43:44.190754 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:44.190777 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:44.190816 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:44.217993 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:44.218072 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:44.346722 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:44.346747 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:44.346760 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:44.406772 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:44.406851 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:44.446066 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:44.446148 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:47.036584 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:47.056342 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:47.056408 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:47.121741 1986432 cri.go:89] found id: ""
	I1124 10:43:47.121762 1986432 logs.go:282] 0 containers: []
	W1124 10:43:47.121770 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:47.121777 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:47.121834 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:47.172741 1986432 cri.go:89] found id: ""
	I1124 10:43:47.172762 1986432 logs.go:282] 0 containers: []
	W1124 10:43:47.172771 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:47.172778 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:47.172835 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:47.216390 1986432 cri.go:89] found id: ""
	I1124 10:43:47.216412 1986432 logs.go:282] 0 containers: []
	W1124 10:43:47.216420 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:47.216427 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:47.216490 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:47.253406 1986432 cri.go:89] found id: ""
	I1124 10:43:47.253426 1986432 logs.go:282] 0 containers: []
	W1124 10:43:47.253435 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:47.253441 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:47.253495 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:47.299310 1986432 cri.go:89] found id: ""
	I1124 10:43:47.299332 1986432 logs.go:282] 0 containers: []
	W1124 10:43:47.299341 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:47.299347 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:47.299406 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:47.341339 1986432 cri.go:89] found id: ""
	I1124 10:43:47.341361 1986432 logs.go:282] 0 containers: []
	W1124 10:43:47.341369 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:47.341376 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:47.341431 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:47.391668 1986432 cri.go:89] found id: ""
	I1124 10:43:47.391689 1986432 logs.go:282] 0 containers: []
	W1124 10:43:47.391697 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:47.391704 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:47.391765 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:47.439691 1986432 cri.go:89] found id: ""
	I1124 10:43:47.439754 1986432 logs.go:282] 0 containers: []
	W1124 10:43:47.439787 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:47.439810 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:47.439844 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:47.463944 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:47.464016 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:47.553211 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:47.553285 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:47.553315 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:47.600786 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:47.600863 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:47.633014 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:47.633080 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:50.215134 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:50.227487 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:50.227567 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:50.259967 1986432 cri.go:89] found id: ""
	I1124 10:43:50.259993 1986432 logs.go:282] 0 containers: []
	W1124 10:43:50.260003 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:50.260009 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:50.260069 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:50.290612 1986432 cri.go:89] found id: ""
	I1124 10:43:50.290637 1986432 logs.go:282] 0 containers: []
	W1124 10:43:50.290647 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:50.290653 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:50.290713 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:50.320043 1986432 cri.go:89] found id: ""
	I1124 10:43:50.320112 1986432 logs.go:282] 0 containers: []
	W1124 10:43:50.320135 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:50.320156 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:50.320246 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:50.350342 1986432 cri.go:89] found id: ""
	I1124 10:43:50.350365 1986432 logs.go:282] 0 containers: []
	W1124 10:43:50.350374 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:50.350381 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:50.350442 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:50.378806 1986432 cri.go:89] found id: ""
	I1124 10:43:50.378874 1986432 logs.go:282] 0 containers: []
	W1124 10:43:50.378891 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:50.378899 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:50.378961 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:50.405578 1986432 cri.go:89] found id: ""
	I1124 10:43:50.405605 1986432 logs.go:282] 0 containers: []
	W1124 10:43:50.405625 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:50.405632 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:50.405696 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:50.433638 1986432 cri.go:89] found id: ""
	I1124 10:43:50.433668 1986432 logs.go:282] 0 containers: []
	W1124 10:43:50.433678 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:50.433685 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:50.433745 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:50.461231 1986432 cri.go:89] found id: ""
	I1124 10:43:50.461255 1986432 logs.go:282] 0 containers: []
	W1124 10:43:50.461263 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:50.461273 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:50.461287 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:50.536841 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:50.536880 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:50.554336 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:50.554369 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:50.620903 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:50.620925 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:50.620937 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:50.662176 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:50.662211 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:53.194485 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:53.206701 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:53.206773 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:53.246129 1986432 cri.go:89] found id: ""
	I1124 10:43:53.246152 1986432 logs.go:282] 0 containers: []
	W1124 10:43:53.246162 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:53.246168 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:53.246227 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:53.277202 1986432 cri.go:89] found id: ""
	I1124 10:43:53.277225 1986432 logs.go:282] 0 containers: []
	W1124 10:43:53.277234 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:53.277240 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:53.277297 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:53.316001 1986432 cri.go:89] found id: ""
	I1124 10:43:53.316024 1986432 logs.go:282] 0 containers: []
	W1124 10:43:53.316033 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:53.316039 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:53.316108 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:53.365123 1986432 cri.go:89] found id: ""
	I1124 10:43:53.365146 1986432 logs.go:282] 0 containers: []
	W1124 10:43:53.365156 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:53.365167 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:53.365229 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:53.411315 1986432 cri.go:89] found id: ""
	I1124 10:43:53.411342 1986432 logs.go:282] 0 containers: []
	W1124 10:43:53.411351 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:53.411357 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:53.411414 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:53.471448 1986432 cri.go:89] found id: ""
	I1124 10:43:53.471470 1986432 logs.go:282] 0 containers: []
	W1124 10:43:53.471479 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:53.471485 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:53.471543 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:53.524841 1986432 cri.go:89] found id: ""
	I1124 10:43:53.524864 1986432 logs.go:282] 0 containers: []
	W1124 10:43:53.524874 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:53.524881 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:53.524940 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:53.572449 1986432 cri.go:89] found id: ""
	I1124 10:43:53.572477 1986432 logs.go:282] 0 containers: []
	W1124 10:43:53.572486 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:53.572495 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:53.572506 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:53.675742 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:53.675840 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:53.706876 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:53.707269 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:53.839535 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:53.839609 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:53.839638 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:53.909757 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:53.909796 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:56.443030 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:56.453951 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:56.454026 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:56.484433 1986432 cri.go:89] found id: ""
	I1124 10:43:56.484456 1986432 logs.go:282] 0 containers: []
	W1124 10:43:56.484465 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:56.484472 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:56.484533 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:56.519648 1986432 cri.go:89] found id: ""
	I1124 10:43:56.519671 1986432 logs.go:282] 0 containers: []
	W1124 10:43:56.519679 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:56.519686 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:56.519747 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:56.551537 1986432 cri.go:89] found id: ""
	I1124 10:43:56.551563 1986432 logs.go:282] 0 containers: []
	W1124 10:43:56.551572 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:56.551579 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:56.551642 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:56.581943 1986432 cri.go:89] found id: ""
	I1124 10:43:56.581983 1986432 logs.go:282] 0 containers: []
	W1124 10:43:56.581995 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:56.582003 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:56.582063 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:56.627918 1986432 cri.go:89] found id: ""
	I1124 10:43:56.627938 1986432 logs.go:282] 0 containers: []
	W1124 10:43:56.627947 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:56.627954 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:56.628018 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:56.659671 1986432 cri.go:89] found id: ""
	I1124 10:43:56.659693 1986432 logs.go:282] 0 containers: []
	W1124 10:43:56.659703 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:56.659710 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:56.659767 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:56.687257 1986432 cri.go:89] found id: ""
	I1124 10:43:56.687280 1986432 logs.go:282] 0 containers: []
	W1124 10:43:56.687294 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:56.687302 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:56.687361 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:56.729273 1986432 cri.go:89] found id: ""
	I1124 10:43:56.729296 1986432 logs.go:282] 0 containers: []
	W1124 10:43:56.729306 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:56.729315 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:56.729327 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:56.830735 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:56.830772 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:56.851003 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:56.851035 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:56.923875 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:56.923897 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:56.923911 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:43:56.966393 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:43:56.966430 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:43:59.516340 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:43:59.526761 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:43:59.526867 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:43:59.555078 1986432 cri.go:89] found id: ""
	I1124 10:43:59.555102 1986432 logs.go:282] 0 containers: []
	W1124 10:43:59.555111 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:43:59.555119 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:43:59.555177 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:43:59.583287 1986432 cri.go:89] found id: ""
	I1124 10:43:59.583312 1986432 logs.go:282] 0 containers: []
	W1124 10:43:59.583321 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:43:59.583327 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:43:59.583385 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:43:59.610465 1986432 cri.go:89] found id: ""
	I1124 10:43:59.610489 1986432 logs.go:282] 0 containers: []
	W1124 10:43:59.610497 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:43:59.610504 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:43:59.610562 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:43:59.637896 1986432 cri.go:89] found id: ""
	I1124 10:43:59.637935 1986432 logs.go:282] 0 containers: []
	W1124 10:43:59.637945 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:43:59.637952 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:43:59.638011 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:43:59.665297 1986432 cri.go:89] found id: ""
	I1124 10:43:59.665376 1986432 logs.go:282] 0 containers: []
	W1124 10:43:59.665400 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:43:59.665421 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:43:59.665512 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:43:59.696498 1986432 cri.go:89] found id: ""
	I1124 10:43:59.696519 1986432 logs.go:282] 0 containers: []
	W1124 10:43:59.696528 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:43:59.696535 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:43:59.696593 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:43:59.748294 1986432 cri.go:89] found id: ""
	I1124 10:43:59.748372 1986432 logs.go:282] 0 containers: []
	W1124 10:43:59.748396 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:43:59.748419 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:43:59.748529 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:43:59.798664 1986432 cri.go:89] found id: ""
	I1124 10:43:59.798737 1986432 logs.go:282] 0 containers: []
	W1124 10:43:59.798760 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:43:59.798784 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:43:59.798830 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:43:59.889399 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:43:59.889481 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:43:59.907621 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:43:59.907694 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:43:59.981689 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:43:59.981752 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:43:59.981780 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:00.028330 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:00.028374 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:02.688018 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:02.698676 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:02.698755 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:02.740947 1986432 cri.go:89] found id: ""
	I1124 10:44:02.740972 1986432 logs.go:282] 0 containers: []
	W1124 10:44:02.740980 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:02.740987 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:02.741045 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:02.778876 1986432 cri.go:89] found id: ""
	I1124 10:44:02.778898 1986432 logs.go:282] 0 containers: []
	W1124 10:44:02.778907 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:02.778914 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:02.778972 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:02.813710 1986432 cri.go:89] found id: ""
	I1124 10:44:02.813735 1986432 logs.go:282] 0 containers: []
	W1124 10:44:02.813744 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:02.813751 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:02.813812 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:02.851111 1986432 cri.go:89] found id: ""
	I1124 10:44:02.851141 1986432 logs.go:282] 0 containers: []
	W1124 10:44:02.851150 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:02.851158 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:02.851222 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:02.899628 1986432 cri.go:89] found id: ""
	I1124 10:44:02.899673 1986432 logs.go:282] 0 containers: []
	W1124 10:44:02.899681 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:02.899695 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:02.899837 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:02.933180 1986432 cri.go:89] found id: ""
	I1124 10:44:02.933202 1986432 logs.go:282] 0 containers: []
	W1124 10:44:02.933211 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:02.933218 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:02.933279 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:02.984054 1986432 cri.go:89] found id: ""
	I1124 10:44:02.984077 1986432 logs.go:282] 0 containers: []
	W1124 10:44:02.984085 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:02.984092 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:02.984156 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:03.032189 1986432 cri.go:89] found id: ""
	I1124 10:44:03.032211 1986432 logs.go:282] 0 containers: []
	W1124 10:44:03.032220 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:03.032229 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:03.032241 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:03.126473 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:03.126555 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:03.145973 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:03.146078 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:03.259607 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:03.259624 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:03.259637 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:03.336049 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:03.336142 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:05.898658 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:05.909495 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:05.909571 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:05.935470 1986432 cri.go:89] found id: ""
	I1124 10:44:05.935495 1986432 logs.go:282] 0 containers: []
	W1124 10:44:05.935505 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:05.935512 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:05.935574 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:05.962355 1986432 cri.go:89] found id: ""
	I1124 10:44:05.962379 1986432 logs.go:282] 0 containers: []
	W1124 10:44:05.962388 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:05.962395 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:05.962456 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:05.988749 1986432 cri.go:89] found id: ""
	I1124 10:44:05.988772 1986432 logs.go:282] 0 containers: []
	W1124 10:44:05.988782 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:05.988789 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:05.988854 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:06.019840 1986432 cri.go:89] found id: ""
	I1124 10:44:06.019915 1986432 logs.go:282] 0 containers: []
	W1124 10:44:06.019932 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:06.019941 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:06.020010 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:06.051655 1986432 cri.go:89] found id: ""
	I1124 10:44:06.051678 1986432 logs.go:282] 0 containers: []
	W1124 10:44:06.051687 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:06.051694 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:06.051760 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:06.082974 1986432 cri.go:89] found id: ""
	I1124 10:44:06.082997 1986432 logs.go:282] 0 containers: []
	W1124 10:44:06.083006 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:06.083014 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:06.083077 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:06.115989 1986432 cri.go:89] found id: ""
	I1124 10:44:06.116011 1986432 logs.go:282] 0 containers: []
	W1124 10:44:06.116020 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:06.116026 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:06.116134 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:06.142344 1986432 cri.go:89] found id: ""
	I1124 10:44:06.142371 1986432 logs.go:282] 0 containers: []
	W1124 10:44:06.142381 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:06.142391 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:06.142403 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:06.211974 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:06.212012 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:06.229688 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:06.229721 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:06.295818 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:06.295840 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:06.295855 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:06.341917 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:06.341959 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:08.873481 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:08.885750 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:08.885818 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:08.928253 1986432 cri.go:89] found id: ""
	I1124 10:44:08.928274 1986432 logs.go:282] 0 containers: []
	W1124 10:44:08.928283 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:08.928291 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:08.928382 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:08.970232 1986432 cri.go:89] found id: ""
	I1124 10:44:08.970253 1986432 logs.go:282] 0 containers: []
	W1124 10:44:08.970261 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:08.970268 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:08.970327 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:08.998151 1986432 cri.go:89] found id: ""
	I1124 10:44:08.998173 1986432 logs.go:282] 0 containers: []
	W1124 10:44:08.998182 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:08.998189 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:08.998248 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:09.028321 1986432 cri.go:89] found id: ""
	I1124 10:44:09.028342 1986432 logs.go:282] 0 containers: []
	W1124 10:44:09.028352 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:09.028359 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:09.028421 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:09.063794 1986432 cri.go:89] found id: ""
	I1124 10:44:09.063867 1986432 logs.go:282] 0 containers: []
	W1124 10:44:09.063890 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:09.063898 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:09.063965 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:09.095498 1986432 cri.go:89] found id: ""
	I1124 10:44:09.095519 1986432 logs.go:282] 0 containers: []
	W1124 10:44:09.095528 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:09.095535 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:09.095640 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:09.124508 1986432 cri.go:89] found id: ""
	I1124 10:44:09.124530 1986432 logs.go:282] 0 containers: []
	W1124 10:44:09.124538 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:09.124549 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:09.124606 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:09.157587 1986432 cri.go:89] found id: ""
	I1124 10:44:09.157617 1986432 logs.go:282] 0 containers: []
	W1124 10:44:09.157626 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:09.157635 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:09.157646 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:09.238786 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:09.238856 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:09.258497 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:09.258570 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:09.339657 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:09.339680 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:09.339692 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:09.397054 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:09.397098 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:11.931682 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:11.942682 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:11.942750 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:11.979275 1986432 cri.go:89] found id: ""
	I1124 10:44:11.979301 1986432 logs.go:282] 0 containers: []
	W1124 10:44:11.979310 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:11.979316 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:11.979376 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:12.009459 1986432 cri.go:89] found id: ""
	I1124 10:44:12.009488 1986432 logs.go:282] 0 containers: []
	W1124 10:44:12.009497 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:12.009504 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:12.009565 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:12.048906 1986432 cri.go:89] found id: ""
	I1124 10:44:12.048933 1986432 logs.go:282] 0 containers: []
	W1124 10:44:12.048942 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:12.048949 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:12.049007 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:12.077716 1986432 cri.go:89] found id: ""
	I1124 10:44:12.077738 1986432 logs.go:282] 0 containers: []
	W1124 10:44:12.077746 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:12.077754 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:12.077811 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:12.109795 1986432 cri.go:89] found id: ""
	I1124 10:44:12.109816 1986432 logs.go:282] 0 containers: []
	W1124 10:44:12.109825 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:12.109831 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:12.109891 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:12.143413 1986432 cri.go:89] found id: ""
	I1124 10:44:12.143439 1986432 logs.go:282] 0 containers: []
	W1124 10:44:12.143448 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:12.143455 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:12.143511 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:12.179150 1986432 cri.go:89] found id: ""
	I1124 10:44:12.179172 1986432 logs.go:282] 0 containers: []
	W1124 10:44:12.179181 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:12.179187 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:12.179247 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:12.206588 1986432 cri.go:89] found id: ""
	I1124 10:44:12.206665 1986432 logs.go:282] 0 containers: []
	W1124 10:44:12.206677 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:12.206686 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:12.206701 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:12.283149 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:12.283229 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:12.299743 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:12.299769 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:12.381444 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:12.381466 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:12.381479 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:12.427385 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:12.427424 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:14.994165 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:15.007389 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:15.007471 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:15.054720 1986432 cri.go:89] found id: ""
	I1124 10:44:15.054752 1986432 logs.go:282] 0 containers: []
	W1124 10:44:15.054761 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:15.054768 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:15.054833 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:15.095423 1986432 cri.go:89] found id: ""
	I1124 10:44:15.095453 1986432 logs.go:282] 0 containers: []
	W1124 10:44:15.095463 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:15.095470 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:15.095533 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:15.134871 1986432 cri.go:89] found id: ""
	I1124 10:44:15.134895 1986432 logs.go:282] 0 containers: []
	W1124 10:44:15.134903 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:15.134910 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:15.134974 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:15.171727 1986432 cri.go:89] found id: ""
	I1124 10:44:15.171812 1986432 logs.go:282] 0 containers: []
	W1124 10:44:15.171835 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:15.171874 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:15.171989 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:15.199666 1986432 cri.go:89] found id: ""
	I1124 10:44:15.199688 1986432 logs.go:282] 0 containers: []
	W1124 10:44:15.199696 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:15.199703 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:15.199764 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:15.235555 1986432 cri.go:89] found id: ""
	I1124 10:44:15.235576 1986432 logs.go:282] 0 containers: []
	W1124 10:44:15.235584 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:15.235591 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:15.235648 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:15.271671 1986432 cri.go:89] found id: ""
	I1124 10:44:15.271753 1986432 logs.go:282] 0 containers: []
	W1124 10:44:15.271774 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:15.271795 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:15.271923 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:15.308275 1986432 cri.go:89] found id: ""
	I1124 10:44:15.308296 1986432 logs.go:282] 0 containers: []
	W1124 10:44:15.308305 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:15.308314 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:15.308325 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:15.382836 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:15.382918 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:15.400523 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:15.400549 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:15.497944 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:15.498014 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:15.498040 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:15.563098 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:15.563204 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:18.113223 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:18.124078 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:18.124151 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:18.162049 1986432 cri.go:89] found id: ""
	I1124 10:44:18.162072 1986432 logs.go:282] 0 containers: []
	W1124 10:44:18.162081 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:18.162088 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:18.162145 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:18.190793 1986432 cri.go:89] found id: ""
	I1124 10:44:18.190815 1986432 logs.go:282] 0 containers: []
	W1124 10:44:18.190824 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:18.190831 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:18.190895 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:18.227604 1986432 cri.go:89] found id: ""
	I1124 10:44:18.227628 1986432 logs.go:282] 0 containers: []
	W1124 10:44:18.227637 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:18.227644 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:18.227703 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:18.272618 1986432 cri.go:89] found id: ""
	I1124 10:44:18.272640 1986432 logs.go:282] 0 containers: []
	W1124 10:44:18.272650 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:18.272657 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:18.272716 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:18.319572 1986432 cri.go:89] found id: ""
	I1124 10:44:18.319595 1986432 logs.go:282] 0 containers: []
	W1124 10:44:18.319603 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:18.319610 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:18.319672 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:18.346826 1986432 cri.go:89] found id: ""
	I1124 10:44:18.346847 1986432 logs.go:282] 0 containers: []
	W1124 10:44:18.346856 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:18.346863 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:18.346922 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:18.373345 1986432 cri.go:89] found id: ""
	I1124 10:44:18.373366 1986432 logs.go:282] 0 containers: []
	W1124 10:44:18.373375 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:18.373382 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:18.373439 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:18.404045 1986432 cri.go:89] found id: ""
	I1124 10:44:18.404068 1986432 logs.go:282] 0 containers: []
	W1124 10:44:18.404078 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:18.404087 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:18.404099 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:18.485453 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:18.485525 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:18.485554 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:18.530797 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:18.530886 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:18.571351 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:18.571374 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:18.642162 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:18.642246 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:21.159448 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:21.170979 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:21.171043 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:21.200137 1986432 cri.go:89] found id: ""
	I1124 10:44:21.200158 1986432 logs.go:282] 0 containers: []
	W1124 10:44:21.200167 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:21.200173 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:21.200230 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:21.229064 1986432 cri.go:89] found id: ""
	I1124 10:44:21.229086 1986432 logs.go:282] 0 containers: []
	W1124 10:44:21.229095 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:21.229146 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:21.229206 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:21.263683 1986432 cri.go:89] found id: ""
	I1124 10:44:21.263704 1986432 logs.go:282] 0 containers: []
	W1124 10:44:21.263713 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:21.263719 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:21.263783 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:21.296428 1986432 cri.go:89] found id: ""
	I1124 10:44:21.296450 1986432 logs.go:282] 0 containers: []
	W1124 10:44:21.296458 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:21.296466 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:21.296527 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:21.326371 1986432 cri.go:89] found id: ""
	I1124 10:44:21.326393 1986432 logs.go:282] 0 containers: []
	W1124 10:44:21.326402 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:21.326408 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:21.326467 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:21.355707 1986432 cri.go:89] found id: ""
	I1124 10:44:21.355730 1986432 logs.go:282] 0 containers: []
	W1124 10:44:21.355738 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:21.355745 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:21.355802 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:21.387658 1986432 cri.go:89] found id: ""
	I1124 10:44:21.387681 1986432 logs.go:282] 0 containers: []
	W1124 10:44:21.387690 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:21.387697 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:21.387755 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:21.417549 1986432 cri.go:89] found id: ""
	I1124 10:44:21.417571 1986432 logs.go:282] 0 containers: []
	W1124 10:44:21.417580 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:21.417589 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:21.417601 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:21.509486 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:21.509504 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:21.509516 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:21.556018 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:21.556144 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:21.590105 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:21.590183 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:21.666783 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:21.666865 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:24.183994 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:24.201464 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:24.201536 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:24.246128 1986432 cri.go:89] found id: ""
	I1124 10:44:24.246150 1986432 logs.go:282] 0 containers: []
	W1124 10:44:24.246159 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:24.246166 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:24.246226 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:24.295311 1986432 cri.go:89] found id: ""
	I1124 10:44:24.295333 1986432 logs.go:282] 0 containers: []
	W1124 10:44:24.295341 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:24.295348 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:24.295413 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:24.335611 1986432 cri.go:89] found id: ""
	I1124 10:44:24.335633 1986432 logs.go:282] 0 containers: []
	W1124 10:44:24.335642 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:24.335649 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:24.335710 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:24.383355 1986432 cri.go:89] found id: ""
	I1124 10:44:24.383377 1986432 logs.go:282] 0 containers: []
	W1124 10:44:24.383386 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:24.383393 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:24.383453 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:24.435655 1986432 cri.go:89] found id: ""
	I1124 10:44:24.435722 1986432 logs.go:282] 0 containers: []
	W1124 10:44:24.435746 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:24.435766 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:24.435857 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:24.487577 1986432 cri.go:89] found id: ""
	I1124 10:44:24.487643 1986432 logs.go:282] 0 containers: []
	W1124 10:44:24.487668 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:24.487688 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:24.487778 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:24.546572 1986432 cri.go:89] found id: ""
	I1124 10:44:24.546639 1986432 logs.go:282] 0 containers: []
	W1124 10:44:24.546666 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:24.546687 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:24.546774 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:24.587741 1986432 cri.go:89] found id: ""
	I1124 10:44:24.587825 1986432 logs.go:282] 0 containers: []
	W1124 10:44:24.587848 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:24.587871 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:24.587910 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:24.694726 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:24.694769 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:24.728571 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:24.728607 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:24.886306 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:24.886331 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:24.886356 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:24.944663 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:24.944701 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:27.513765 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:27.526341 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:27.526415 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:27.571952 1986432 cri.go:89] found id: ""
	I1124 10:44:27.571989 1986432 logs.go:282] 0 containers: []
	W1124 10:44:27.571999 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:27.572006 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:27.572067 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:27.623541 1986432 cri.go:89] found id: ""
	I1124 10:44:27.623568 1986432 logs.go:282] 0 containers: []
	W1124 10:44:27.623577 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:27.623585 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:27.623645 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:27.664732 1986432 cri.go:89] found id: ""
	I1124 10:44:27.664759 1986432 logs.go:282] 0 containers: []
	W1124 10:44:27.664786 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:27.664792 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:27.664861 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:27.703094 1986432 cri.go:89] found id: ""
	I1124 10:44:27.703121 1986432 logs.go:282] 0 containers: []
	W1124 10:44:27.703130 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:27.703140 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:27.703199 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:27.758226 1986432 cri.go:89] found id: ""
	I1124 10:44:27.758253 1986432 logs.go:282] 0 containers: []
	W1124 10:44:27.758262 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:27.758269 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:27.758328 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:27.804106 1986432 cri.go:89] found id: ""
	I1124 10:44:27.804132 1986432 logs.go:282] 0 containers: []
	W1124 10:44:27.804141 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:27.804148 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:27.804218 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:27.852775 1986432 cri.go:89] found id: ""
	I1124 10:44:27.852801 1986432 logs.go:282] 0 containers: []
	W1124 10:44:27.852811 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:27.852818 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:27.852884 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:27.902550 1986432 cri.go:89] found id: ""
	I1124 10:44:27.902576 1986432 logs.go:282] 0 containers: []
	W1124 10:44:27.902586 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:27.902595 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:27.902607 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:28.069350 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:28.069373 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:28.069388 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:28.124776 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:28.124823 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:28.199833 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:28.199865 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:28.292065 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:28.292107 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:30.813243 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:30.826387 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:30.826460 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:30.870972 1986432 cri.go:89] found id: ""
	I1124 10:44:30.871001 1986432 logs.go:282] 0 containers: []
	W1124 10:44:30.871010 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:30.871018 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:30.871076 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:30.918764 1986432 cri.go:89] found id: ""
	I1124 10:44:30.918792 1986432 logs.go:282] 0 containers: []
	W1124 10:44:30.918802 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:30.918809 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:30.918871 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:30.957678 1986432 cri.go:89] found id: ""
	I1124 10:44:30.957704 1986432 logs.go:282] 0 containers: []
	W1124 10:44:30.957714 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:30.957721 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:30.957784 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:31.009371 1986432 cri.go:89] found id: ""
	I1124 10:44:31.009400 1986432 logs.go:282] 0 containers: []
	W1124 10:44:31.009410 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:31.009418 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:31.009480 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:31.065903 1986432 cri.go:89] found id: ""
	I1124 10:44:31.065929 1986432 logs.go:282] 0 containers: []
	W1124 10:44:31.065937 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:31.065944 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:31.066005 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:31.126988 1986432 cri.go:89] found id: ""
	I1124 10:44:31.127013 1986432 logs.go:282] 0 containers: []
	W1124 10:44:31.127022 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:31.127029 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:31.127100 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:31.166993 1986432 cri.go:89] found id: ""
	I1124 10:44:31.167019 1986432 logs.go:282] 0 containers: []
	W1124 10:44:31.167029 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:31.167035 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:31.167094 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:31.201223 1986432 cri.go:89] found id: ""
	I1124 10:44:31.201252 1986432 logs.go:282] 0 containers: []
	W1124 10:44:31.201261 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:31.201271 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:31.201282 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:31.320119 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:31.320141 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:31.320154 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:31.366060 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:31.366095 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:31.420784 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:31.420813 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:31.505068 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:31.505294 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:34.025231 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:34.037722 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:34.037820 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:34.065037 1986432 cri.go:89] found id: ""
	I1124 10:44:34.065063 1986432 logs.go:282] 0 containers: []
	W1124 10:44:34.065072 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:34.065079 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:34.065228 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:34.093956 1986432 cri.go:89] found id: ""
	I1124 10:44:34.093985 1986432 logs.go:282] 0 containers: []
	W1124 10:44:34.093994 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:34.094001 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:34.094086 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:34.124117 1986432 cri.go:89] found id: ""
	I1124 10:44:34.124144 1986432 logs.go:282] 0 containers: []
	W1124 10:44:34.124160 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:34.124167 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:34.124273 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:34.155368 1986432 cri.go:89] found id: ""
	I1124 10:44:34.155395 1986432 logs.go:282] 0 containers: []
	W1124 10:44:34.155405 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:34.155413 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:34.155473 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:34.183582 1986432 cri.go:89] found id: ""
	I1124 10:44:34.183606 1986432 logs.go:282] 0 containers: []
	W1124 10:44:34.183615 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:34.183621 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:34.183682 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:34.215065 1986432 cri.go:89] found id: ""
	I1124 10:44:34.215092 1986432 logs.go:282] 0 containers: []
	W1124 10:44:34.215101 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:34.215107 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:34.215170 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:34.269515 1986432 cri.go:89] found id: ""
	I1124 10:44:34.269542 1986432 logs.go:282] 0 containers: []
	W1124 10:44:34.269550 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:34.269558 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:34.269618 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:34.301354 1986432 cri.go:89] found id: ""
	I1124 10:44:34.301381 1986432 logs.go:282] 0 containers: []
	W1124 10:44:34.301391 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:34.301399 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:34.301412 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:34.319663 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:34.319693 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:34.404818 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:34.404837 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:34.404852 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:34.453990 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:34.454031 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:34.491361 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:34.491390 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:37.073286 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:37.083947 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:37.084063 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:37.110744 1986432 cri.go:89] found id: ""
	I1124 10:44:37.110769 1986432 logs.go:282] 0 containers: []
	W1124 10:44:37.110778 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:37.110786 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:37.110845 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:37.143992 1986432 cri.go:89] found id: ""
	I1124 10:44:37.144023 1986432 logs.go:282] 0 containers: []
	W1124 10:44:37.144033 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:37.144040 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:37.144100 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:37.171177 1986432 cri.go:89] found id: ""
	I1124 10:44:37.171204 1986432 logs.go:282] 0 containers: []
	W1124 10:44:37.171212 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:37.171219 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:37.171279 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:37.241401 1986432 cri.go:89] found id: ""
	I1124 10:44:37.241439 1986432 logs.go:282] 0 containers: []
	W1124 10:44:37.241455 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:37.241462 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:37.241560 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:37.314742 1986432 cri.go:89] found id: ""
	I1124 10:44:37.314785 1986432 logs.go:282] 0 containers: []
	W1124 10:44:37.314796 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:37.314803 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:37.314903 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:37.350091 1986432 cri.go:89] found id: ""
	I1124 10:44:37.350123 1986432 logs.go:282] 0 containers: []
	W1124 10:44:37.350132 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:37.350139 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:37.350212 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:37.390730 1986432 cri.go:89] found id: ""
	I1124 10:44:37.390769 1986432 logs.go:282] 0 containers: []
	W1124 10:44:37.390789 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:37.390804 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:37.390906 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:37.437815 1986432 cri.go:89] found id: ""
	I1124 10:44:37.437852 1986432 logs.go:282] 0 containers: []
	W1124 10:44:37.437862 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:37.437871 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:37.437895 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:37.463379 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:37.463415 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:37.567817 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:37.567850 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:37.567865 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:37.613580 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:37.613615 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:37.649942 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:37.649969 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:40.222830 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:40.235021 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:40.235095 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:40.265851 1986432 cri.go:89] found id: ""
	I1124 10:44:40.265874 1986432 logs.go:282] 0 containers: []
	W1124 10:44:40.265884 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:40.265891 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:40.265952 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:40.296148 1986432 cri.go:89] found id: ""
	I1124 10:44:40.296170 1986432 logs.go:282] 0 containers: []
	W1124 10:44:40.296179 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:40.296186 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:40.296251 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:40.323471 1986432 cri.go:89] found id: ""
	I1124 10:44:40.323493 1986432 logs.go:282] 0 containers: []
	W1124 10:44:40.323502 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:40.323510 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:40.323573 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:40.353257 1986432 cri.go:89] found id: ""
	I1124 10:44:40.353279 1986432 logs.go:282] 0 containers: []
	W1124 10:44:40.353288 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:40.353295 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:40.353354 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:40.384343 1986432 cri.go:89] found id: ""
	I1124 10:44:40.384368 1986432 logs.go:282] 0 containers: []
	W1124 10:44:40.384377 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:40.384383 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:40.384440 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:40.411131 1986432 cri.go:89] found id: ""
	I1124 10:44:40.411154 1986432 logs.go:282] 0 containers: []
	W1124 10:44:40.411163 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:40.411169 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:40.411228 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:40.440666 1986432 cri.go:89] found id: ""
	I1124 10:44:40.440687 1986432 logs.go:282] 0 containers: []
	W1124 10:44:40.440696 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:40.440703 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:40.440761 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:40.470563 1986432 cri.go:89] found id: ""
	I1124 10:44:40.470585 1986432 logs.go:282] 0 containers: []
	W1124 10:44:40.470594 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:40.470606 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:40.470617 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:40.533083 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:40.533170 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:40.533194 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:40.575048 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:40.575083 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:40.604815 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:40.604845 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:40.675096 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:40.675150 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:43.193550 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:43.204374 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:43.204448 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:43.239931 1986432 cri.go:89] found id: ""
	I1124 10:44:43.239957 1986432 logs.go:282] 0 containers: []
	W1124 10:44:43.239966 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:43.239972 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:43.240033 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:43.271665 1986432 cri.go:89] found id: ""
	I1124 10:44:43.271688 1986432 logs.go:282] 0 containers: []
	W1124 10:44:43.271696 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:43.271721 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:43.271803 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:43.302895 1986432 cri.go:89] found id: ""
	I1124 10:44:43.302918 1986432 logs.go:282] 0 containers: []
	W1124 10:44:43.302927 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:43.302933 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:43.302990 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:43.330344 1986432 cri.go:89] found id: ""
	I1124 10:44:43.330367 1986432 logs.go:282] 0 containers: []
	W1124 10:44:43.330376 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:43.330382 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:43.330443 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:43.356829 1986432 cri.go:89] found id: ""
	I1124 10:44:43.356916 1986432 logs.go:282] 0 containers: []
	W1124 10:44:43.356939 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:43.356961 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:43.357096 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:43.385286 1986432 cri.go:89] found id: ""
	I1124 10:44:43.385309 1986432 logs.go:282] 0 containers: []
	W1124 10:44:43.385317 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:43.385328 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:43.385387 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:43.412875 1986432 cri.go:89] found id: ""
	I1124 10:44:43.412897 1986432 logs.go:282] 0 containers: []
	W1124 10:44:43.412906 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:43.412923 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:43.412989 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:43.440574 1986432 cri.go:89] found id: ""
	I1124 10:44:43.440599 1986432 logs.go:282] 0 containers: []
	W1124 10:44:43.440607 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:43.440615 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:43.440627 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:43.481520 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:43.481556 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:43.513935 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:43.513964 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:43.587788 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:43.587827 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:43.605648 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:43.605682 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:43.675376 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:46.177068 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:46.188514 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:46.188630 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:46.231484 1986432 cri.go:89] found id: ""
	I1124 10:44:46.231582 1986432 logs.go:282] 0 containers: []
	W1124 10:44:46.231616 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:46.231651 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:46.231759 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:46.265322 1986432 cri.go:89] found id: ""
	I1124 10:44:46.265400 1986432 logs.go:282] 0 containers: []
	W1124 10:44:46.265424 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:46.265445 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:46.265543 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:46.309585 1986432 cri.go:89] found id: ""
	I1124 10:44:46.309653 1986432 logs.go:282] 0 containers: []
	W1124 10:44:46.309671 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:46.309679 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:46.309758 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:46.337028 1986432 cri.go:89] found id: ""
	I1124 10:44:46.337052 1986432 logs.go:282] 0 containers: []
	W1124 10:44:46.337061 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:46.337068 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:46.337191 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:46.364039 1986432 cri.go:89] found id: ""
	I1124 10:44:46.364100 1986432 logs.go:282] 0 containers: []
	W1124 10:44:46.364110 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:46.364122 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:46.364196 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:46.396240 1986432 cri.go:89] found id: ""
	I1124 10:44:46.396304 1986432 logs.go:282] 0 containers: []
	W1124 10:44:46.396319 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:46.396328 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:46.396397 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:46.423774 1986432 cri.go:89] found id: ""
	I1124 10:44:46.423798 1986432 logs.go:282] 0 containers: []
	W1124 10:44:46.423807 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:46.423814 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:46.423873 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:46.452092 1986432 cri.go:89] found id: ""
	I1124 10:44:46.452116 1986432 logs.go:282] 0 containers: []
	W1124 10:44:46.452125 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:46.452134 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:46.452146 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:46.522454 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:46.522498 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:46.541744 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:46.541773 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:46.610050 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:46.610073 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:46.610086 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:46.650544 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:46.650581 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:49.182390 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:49.193004 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:49.193079 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:49.223961 1986432 cri.go:89] found id: ""
	I1124 10:44:49.224039 1986432 logs.go:282] 0 containers: []
	W1124 10:44:49.224071 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:49.224101 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:49.224185 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:49.254251 1986432 cri.go:89] found id: ""
	I1124 10:44:49.254326 1986432 logs.go:282] 0 containers: []
	W1124 10:44:49.254350 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:49.254371 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:49.254467 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:49.284000 1986432 cri.go:89] found id: ""
	I1124 10:44:49.284027 1986432 logs.go:282] 0 containers: []
	W1124 10:44:49.284050 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:49.284057 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:49.284201 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:49.312535 1986432 cri.go:89] found id: ""
	I1124 10:44:49.312558 1986432 logs.go:282] 0 containers: []
	W1124 10:44:49.312567 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:49.312574 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:49.312633 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:49.341329 1986432 cri.go:89] found id: ""
	I1124 10:44:49.341352 1986432 logs.go:282] 0 containers: []
	W1124 10:44:49.341361 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:49.341372 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:49.341436 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:49.371074 1986432 cri.go:89] found id: ""
	I1124 10:44:49.371097 1986432 logs.go:282] 0 containers: []
	W1124 10:44:49.371106 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:49.371113 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:49.371174 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:49.397418 1986432 cri.go:89] found id: ""
	I1124 10:44:49.397443 1986432 logs.go:282] 0 containers: []
	W1124 10:44:49.397452 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:49.397459 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:49.397540 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:49.424245 1986432 cri.go:89] found id: ""
	I1124 10:44:49.424283 1986432 logs.go:282] 0 containers: []
	W1124 10:44:49.424294 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:49.424304 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:49.424320 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:49.497085 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:49.497158 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:49.515738 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:49.515912 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:49.594332 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:49.594352 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:49.594377 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:49.637200 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:49.637238 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:52.171419 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:52.181967 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:52.182068 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:52.219715 1986432 cri.go:89] found id: ""
	I1124 10:44:52.219749 1986432 logs.go:282] 0 containers: []
	W1124 10:44:52.219759 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:52.219773 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:52.219854 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:52.260956 1986432 cri.go:89] found id: ""
	I1124 10:44:52.260992 1986432 logs.go:282] 0 containers: []
	W1124 10:44:52.261001 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:52.261007 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:52.261083 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:52.292349 1986432 cri.go:89] found id: ""
	I1124 10:44:52.292385 1986432 logs.go:282] 0 containers: []
	W1124 10:44:52.292395 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:52.292402 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:52.292484 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:52.322522 1986432 cri.go:89] found id: ""
	I1124 10:44:52.322589 1986432 logs.go:282] 0 containers: []
	W1124 10:44:52.322613 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:52.322634 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:52.322719 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:52.350204 1986432 cri.go:89] found id: ""
	I1124 10:44:52.350273 1986432 logs.go:282] 0 containers: []
	W1124 10:44:52.350296 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:52.350322 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:52.350413 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:52.376855 1986432 cri.go:89] found id: ""
	I1124 10:44:52.376881 1986432 logs.go:282] 0 containers: []
	W1124 10:44:52.376890 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:52.376908 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:52.376971 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:52.407902 1986432 cri.go:89] found id: ""
	I1124 10:44:52.407985 1986432 logs.go:282] 0 containers: []
	W1124 10:44:52.408003 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:52.408010 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:52.408094 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:52.437625 1986432 cri.go:89] found id: ""
	I1124 10:44:52.437651 1986432 logs.go:282] 0 containers: []
	W1124 10:44:52.437668 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:52.437679 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:52.437691 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:52.499644 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:52.499672 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:52.499685 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:52.543847 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:52.543885 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:52.574045 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:52.574075 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:52.645399 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:52.645439 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:55.168032 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:55.180272 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:55.180375 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:55.217283 1986432 cri.go:89] found id: ""
	I1124 10:44:55.217308 1986432 logs.go:282] 0 containers: []
	W1124 10:44:55.217317 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:55.217324 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:55.217387 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:55.249577 1986432 cri.go:89] found id: ""
	I1124 10:44:55.249603 1986432 logs.go:282] 0 containers: []
	W1124 10:44:55.249612 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:55.249619 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:55.249677 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:55.281901 1986432 cri.go:89] found id: ""
	I1124 10:44:55.281924 1986432 logs.go:282] 0 containers: []
	W1124 10:44:55.281933 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:55.281940 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:55.282002 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:55.308910 1986432 cri.go:89] found id: ""
	I1124 10:44:55.308936 1986432 logs.go:282] 0 containers: []
	W1124 10:44:55.308945 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:55.308953 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:55.309010 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:55.334038 1986432 cri.go:89] found id: ""
	I1124 10:44:55.334104 1986432 logs.go:282] 0 containers: []
	W1124 10:44:55.334127 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:55.334135 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:55.334193 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:55.361618 1986432 cri.go:89] found id: ""
	I1124 10:44:55.361696 1986432 logs.go:282] 0 containers: []
	W1124 10:44:55.361713 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:55.361722 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:55.361782 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:55.403452 1986432 cri.go:89] found id: ""
	I1124 10:44:55.403488 1986432 logs.go:282] 0 containers: []
	W1124 10:44:55.403497 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:55.403504 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:55.403580 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:55.436553 1986432 cri.go:89] found id: ""
	I1124 10:44:55.436589 1986432 logs.go:282] 0 containers: []
	W1124 10:44:55.436598 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:55.436607 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:55.436621 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:55.507171 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:55.507209 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:55.524823 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:55.524900 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:55.593162 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:55.593186 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:55.593203 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:55.634378 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:55.634416 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:44:58.167639 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:44:58.178209 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:44:58.178277 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:44:58.204438 1986432 cri.go:89] found id: ""
	I1124 10:44:58.204463 1986432 logs.go:282] 0 containers: []
	W1124 10:44:58.204484 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:44:58.204491 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:44:58.204564 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:44:58.237079 1986432 cri.go:89] found id: ""
	I1124 10:44:58.237159 1986432 logs.go:282] 0 containers: []
	W1124 10:44:58.237169 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:44:58.237176 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:44:58.237249 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:44:58.280585 1986432 cri.go:89] found id: ""
	I1124 10:44:58.280611 1986432 logs.go:282] 0 containers: []
	W1124 10:44:58.280631 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:44:58.280638 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:44:58.280715 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:44:58.308267 1986432 cri.go:89] found id: ""
	I1124 10:44:58.308293 1986432 logs.go:282] 0 containers: []
	W1124 10:44:58.308302 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:44:58.308309 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:44:58.308377 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:44:58.336715 1986432 cri.go:89] found id: ""
	I1124 10:44:58.336745 1986432 logs.go:282] 0 containers: []
	W1124 10:44:58.336754 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:44:58.336762 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:44:58.336824 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:44:58.364878 1986432 cri.go:89] found id: ""
	I1124 10:44:58.364952 1986432 logs.go:282] 0 containers: []
	W1124 10:44:58.364974 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:44:58.364994 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:44:58.365081 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:44:58.392651 1986432 cri.go:89] found id: ""
	I1124 10:44:58.392726 1986432 logs.go:282] 0 containers: []
	W1124 10:44:58.392752 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:44:58.392773 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:44:58.392894 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:44:58.420918 1986432 cri.go:89] found id: ""
	I1124 10:44:58.420992 1986432 logs.go:282] 0 containers: []
	W1124 10:44:58.421017 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:44:58.421041 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:44:58.421082 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:44:58.490519 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:44:58.490555 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:44:58.508378 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:44:58.508408 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:44:58.576140 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:44:58.576160 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:44:58.576174 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:44:58.616821 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:44:58.616855 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:01.150765 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:01.173267 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:01.173343 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:01.211217 1986432 cri.go:89] found id: ""
	I1124 10:45:01.211249 1986432 logs.go:282] 0 containers: []
	W1124 10:45:01.211259 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:01.211267 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:01.211356 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:01.241802 1986432 cri.go:89] found id: ""
	I1124 10:45:01.241828 1986432 logs.go:282] 0 containers: []
	W1124 10:45:01.241838 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:01.241846 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:01.241914 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:01.272739 1986432 cri.go:89] found id: ""
	I1124 10:45:01.272767 1986432 logs.go:282] 0 containers: []
	W1124 10:45:01.272776 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:01.272783 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:01.272853 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:01.303341 1986432 cri.go:89] found id: ""
	I1124 10:45:01.303372 1986432 logs.go:282] 0 containers: []
	W1124 10:45:01.303382 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:01.303390 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:01.303468 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:01.334475 1986432 cri.go:89] found id: ""
	I1124 10:45:01.334512 1986432 logs.go:282] 0 containers: []
	W1124 10:45:01.334522 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:01.334530 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:01.334607 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:01.365495 1986432 cri.go:89] found id: ""
	I1124 10:45:01.365527 1986432 logs.go:282] 0 containers: []
	W1124 10:45:01.365536 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:01.365545 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:01.365619 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:01.400556 1986432 cri.go:89] found id: ""
	I1124 10:45:01.400582 1986432 logs.go:282] 0 containers: []
	W1124 10:45:01.400593 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:01.400600 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:01.400666 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:01.433392 1986432 cri.go:89] found id: ""
	I1124 10:45:01.433415 1986432 logs.go:282] 0 containers: []
	W1124 10:45:01.433424 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:01.433449 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:01.433462 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:01.527591 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:01.529206 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:01.548809 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:01.548842 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:01.619211 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:01.619238 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:01.619254 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:01.669840 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:01.669895 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:04.206020 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:04.216629 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:04.216703 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:04.245185 1986432 cri.go:89] found id: ""
	I1124 10:45:04.245214 1986432 logs.go:282] 0 containers: []
	W1124 10:45:04.245224 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:04.245232 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:04.245294 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:04.277691 1986432 cri.go:89] found id: ""
	I1124 10:45:04.277716 1986432 logs.go:282] 0 containers: []
	W1124 10:45:04.277725 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:04.277733 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:04.277795 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:04.306945 1986432 cri.go:89] found id: ""
	I1124 10:45:04.306969 1986432 logs.go:282] 0 containers: []
	W1124 10:45:04.306978 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:04.306985 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:04.307072 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:04.334275 1986432 cri.go:89] found id: ""
	I1124 10:45:04.334306 1986432 logs.go:282] 0 containers: []
	W1124 10:45:04.334321 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:04.334329 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:04.334390 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:04.362992 1986432 cri.go:89] found id: ""
	I1124 10:45:04.363014 1986432 logs.go:282] 0 containers: []
	W1124 10:45:04.363022 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:04.363029 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:04.363088 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:04.393433 1986432 cri.go:89] found id: ""
	I1124 10:45:04.393459 1986432 logs.go:282] 0 containers: []
	W1124 10:45:04.393467 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:04.393473 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:04.393539 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:04.423707 1986432 cri.go:89] found id: ""
	I1124 10:45:04.423740 1986432 logs.go:282] 0 containers: []
	W1124 10:45:04.423750 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:04.423757 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:04.423824 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:04.451697 1986432 cri.go:89] found id: ""
	I1124 10:45:04.451775 1986432 logs.go:282] 0 containers: []
	W1124 10:45:04.451810 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:04.451854 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:04.451887 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:04.491861 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:04.491891 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:04.569948 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:04.569995 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:04.590025 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:04.590062 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:04.663032 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:04.663054 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:04.663067 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:07.205598 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:07.216231 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:07.216303 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:07.242382 1986432 cri.go:89] found id: ""
	I1124 10:45:07.242405 1986432 logs.go:282] 0 containers: []
	W1124 10:45:07.242414 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:07.242421 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:07.242481 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:07.269052 1986432 cri.go:89] found id: ""
	I1124 10:45:07.269075 1986432 logs.go:282] 0 containers: []
	W1124 10:45:07.269084 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:07.269091 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:07.269191 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:07.295394 1986432 cri.go:89] found id: ""
	I1124 10:45:07.295416 1986432 logs.go:282] 0 containers: []
	W1124 10:45:07.295424 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:07.295441 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:07.295500 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:07.322167 1986432 cri.go:89] found id: ""
	I1124 10:45:07.322250 1986432 logs.go:282] 0 containers: []
	W1124 10:45:07.322262 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:07.322270 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:07.322344 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:07.353784 1986432 cri.go:89] found id: ""
	I1124 10:45:07.353806 1986432 logs.go:282] 0 containers: []
	W1124 10:45:07.353839 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:07.353846 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:07.353923 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:07.384085 1986432 cri.go:89] found id: ""
	I1124 10:45:07.384111 1986432 logs.go:282] 0 containers: []
	W1124 10:45:07.384120 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:07.384128 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:07.384195 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:07.414300 1986432 cri.go:89] found id: ""
	I1124 10:45:07.414325 1986432 logs.go:282] 0 containers: []
	W1124 10:45:07.414334 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:07.414340 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:07.414398 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:07.442069 1986432 cri.go:89] found id: ""
	I1124 10:45:07.442097 1986432 logs.go:282] 0 containers: []
	W1124 10:45:07.442108 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:07.442117 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:07.442134 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:07.526688 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:07.526727 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:07.549619 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:07.549651 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:07.624059 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:07.624080 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:07.624099 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:07.666093 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:07.666130 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:10.202370 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:10.213154 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:10.213232 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:10.242526 1986432 cri.go:89] found id: ""
	I1124 10:45:10.242549 1986432 logs.go:282] 0 containers: []
	W1124 10:45:10.242558 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:10.242565 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:10.242632 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:10.275585 1986432 cri.go:89] found id: ""
	I1124 10:45:10.275611 1986432 logs.go:282] 0 containers: []
	W1124 10:45:10.275619 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:10.275626 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:10.275729 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:10.304857 1986432 cri.go:89] found id: ""
	I1124 10:45:10.304883 1986432 logs.go:282] 0 containers: []
	W1124 10:45:10.304892 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:10.304898 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:10.304963 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:10.336316 1986432 cri.go:89] found id: ""
	I1124 10:45:10.336341 1986432 logs.go:282] 0 containers: []
	W1124 10:45:10.336350 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:10.336358 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:10.336416 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:10.364130 1986432 cri.go:89] found id: ""
	I1124 10:45:10.364157 1986432 logs.go:282] 0 containers: []
	W1124 10:45:10.364175 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:10.364182 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:10.364244 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:10.392194 1986432 cri.go:89] found id: ""
	I1124 10:45:10.392231 1986432 logs.go:282] 0 containers: []
	W1124 10:45:10.392241 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:10.392248 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:10.392328 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:10.419901 1986432 cri.go:89] found id: ""
	I1124 10:45:10.419928 1986432 logs.go:282] 0 containers: []
	W1124 10:45:10.419937 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:10.419946 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:10.420007 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:10.449055 1986432 cri.go:89] found id: ""
	I1124 10:45:10.449090 1986432 logs.go:282] 0 containers: []
	W1124 10:45:10.449128 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:10.449149 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:10.449165 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:10.528928 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:10.529011 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:10.548337 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:10.548364 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:10.624953 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:10.625022 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:10.625062 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:10.668031 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:10.668066 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:13.204907 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:13.215185 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:13.215298 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:13.242062 1986432 cri.go:89] found id: ""
	I1124 10:45:13.242085 1986432 logs.go:282] 0 containers: []
	W1124 10:45:13.242095 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:13.242102 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:13.242167 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:13.267525 1986432 cri.go:89] found id: ""
	I1124 10:45:13.267549 1986432 logs.go:282] 0 containers: []
	W1124 10:45:13.267558 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:13.267564 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:13.267619 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:13.298858 1986432 cri.go:89] found id: ""
	I1124 10:45:13.298936 1986432 logs.go:282] 0 containers: []
	W1124 10:45:13.298953 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:13.298961 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:13.299035 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:13.325725 1986432 cri.go:89] found id: ""
	I1124 10:45:13.325749 1986432 logs.go:282] 0 containers: []
	W1124 10:45:13.325757 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:13.325764 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:13.325825 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:13.351200 1986432 cri.go:89] found id: ""
	I1124 10:45:13.351221 1986432 logs.go:282] 0 containers: []
	W1124 10:45:13.351242 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:13.351249 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:13.351319 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:13.383025 1986432 cri.go:89] found id: ""
	I1124 10:45:13.383048 1986432 logs.go:282] 0 containers: []
	W1124 10:45:13.383057 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:13.383064 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:13.383126 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:13.410162 1986432 cri.go:89] found id: ""
	I1124 10:45:13.410186 1986432 logs.go:282] 0 containers: []
	W1124 10:45:13.410195 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:13.410202 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:13.410263 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:13.437770 1986432 cri.go:89] found id: ""
	I1124 10:45:13.437795 1986432 logs.go:282] 0 containers: []
	W1124 10:45:13.437804 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:13.437815 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:13.437828 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:13.488275 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:13.488316 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:13.530324 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:13.530351 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:13.604337 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:13.604374 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:13.621388 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:13.621421 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:13.701680 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:16.202923 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:16.213306 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:16.213372 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:16.241364 1986432 cri.go:89] found id: ""
	I1124 10:45:16.241387 1986432 logs.go:282] 0 containers: []
	W1124 10:45:16.241396 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:16.241403 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:16.241469 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:16.267705 1986432 cri.go:89] found id: ""
	I1124 10:45:16.267727 1986432 logs.go:282] 0 containers: []
	W1124 10:45:16.267736 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:16.267742 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:16.267800 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:16.294080 1986432 cri.go:89] found id: ""
	I1124 10:45:16.294102 1986432 logs.go:282] 0 containers: []
	W1124 10:45:16.294110 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:16.294116 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:16.294174 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:16.319195 1986432 cri.go:89] found id: ""
	I1124 10:45:16.319217 1986432 logs.go:282] 0 containers: []
	W1124 10:45:16.319225 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:16.319233 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:16.319294 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:16.346888 1986432 cri.go:89] found id: ""
	I1124 10:45:16.346911 1986432 logs.go:282] 0 containers: []
	W1124 10:45:16.346920 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:16.346927 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:16.346990 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:16.373014 1986432 cri.go:89] found id: ""
	I1124 10:45:16.373038 1986432 logs.go:282] 0 containers: []
	W1124 10:45:16.373047 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:16.373054 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:16.373141 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:16.400736 1986432 cri.go:89] found id: ""
	I1124 10:45:16.400807 1986432 logs.go:282] 0 containers: []
	W1124 10:45:16.400831 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:16.400852 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:16.400940 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:16.430173 1986432 cri.go:89] found id: ""
	I1124 10:45:16.430249 1986432 logs.go:282] 0 containers: []
	W1124 10:45:16.430273 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:16.430295 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:16.430334 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:16.472954 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:16.472983 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:16.549840 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:16.549876 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:16.567003 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:16.567034 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:16.636079 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:16.636108 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:16.636123 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:19.178462 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:19.188993 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:19.189059 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:19.230211 1986432 cri.go:89] found id: ""
	I1124 10:45:19.230233 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.230242 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:19.230249 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:19.230308 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:19.258132 1986432 cri.go:89] found id: ""
	I1124 10:45:19.258155 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.258176 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:19.258185 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:19.258246 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:19.285507 1986432 cri.go:89] found id: ""
	I1124 10:45:19.285531 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.285539 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:19.285547 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:19.285614 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:19.313743 1986432 cri.go:89] found id: ""
	I1124 10:45:19.313765 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.313774 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:19.313781 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:19.313841 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:19.341218 1986432 cri.go:89] found id: ""
	I1124 10:45:19.341248 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.341257 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:19.341265 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:19.341325 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:19.367950 1986432 cri.go:89] found id: ""
	I1124 10:45:19.367976 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.367985 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:19.367992 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:19.368053 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:19.394601 1986432 cri.go:89] found id: ""
	I1124 10:45:19.394627 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.394637 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:19.394644 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:19.394708 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:19.434228 1986432 cri.go:89] found id: ""
	I1124 10:45:19.434251 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.434260 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:19.434268 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:19.434280 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:19.493533 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:19.493559 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:19.577029 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:19.577071 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:19.595336 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:19.595370 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:19.665756 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:19.665779 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:19.665792 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:22.216483 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:22.235923 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:22.235992 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:22.275278 1986432 cri.go:89] found id: ""
	I1124 10:45:22.275317 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.275330 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:22.275349 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:22.275425 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:22.321491 1986432 cri.go:89] found id: ""
	I1124 10:45:22.321518 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.321529 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:22.321536 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:22.321612 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:22.363465 1986432 cri.go:89] found id: ""
	I1124 10:45:22.363490 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.363499 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:22.363506 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:22.363568 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:22.399782 1986432 cri.go:89] found id: ""
	I1124 10:45:22.399808 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.399818 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:22.399825 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:22.399885 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:22.457990 1986432 cri.go:89] found id: ""
	I1124 10:45:22.458017 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.458025 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:22.458032 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:22.458092 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:22.517733 1986432 cri.go:89] found id: ""
	I1124 10:45:22.517759 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.517768 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:22.517775 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:22.517837 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:22.560877 1986432 cri.go:89] found id: ""
	I1124 10:45:22.560902 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.560911 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:22.560917 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:22.560974 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:22.592310 1986432 cri.go:89] found id: ""
	I1124 10:45:22.592344 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.592353 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:22.592362 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:22.592373 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:22.676382 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:22.676457 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:22.699655 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:22.699681 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:22.784335 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:22.784353 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:22.784365 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:22.840500 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:22.840577 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:25.377619 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:25.388294 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:25.388365 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:25.414360 1986432 cri.go:89] found id: ""
	I1124 10:45:25.414381 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.414390 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:25.414397 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:25.414454 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:25.441377 1986432 cri.go:89] found id: ""
	I1124 10:45:25.441403 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.441413 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:25.441420 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:25.441488 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:25.479479 1986432 cri.go:89] found id: ""
	I1124 10:45:25.479507 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.479516 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:25.479523 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:25.479581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:25.510312 1986432 cri.go:89] found id: ""
	I1124 10:45:25.510341 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.510349 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:25.510357 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:25.510416 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:25.541150 1986432 cri.go:89] found id: ""
	I1124 10:45:25.541175 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.541185 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:25.541192 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:25.541251 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:25.571686 1986432 cri.go:89] found id: ""
	I1124 10:45:25.571714 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.571723 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:25.571730 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:25.571790 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:25.598880 1986432 cri.go:89] found id: ""
	I1124 10:45:25.598901 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.598910 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:25.598917 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:25.598974 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:25.626005 1986432 cri.go:89] found id: ""
	I1124 10:45:25.626027 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.626036 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:25.626045 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:25.626056 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:25.666222 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:25.666258 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:25.703262 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:25.703294 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:25.778362 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:25.778400 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:25.796483 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:25.796514 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:25.866710 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:28.366962 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:28.385684 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:28.385762 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:28.415652 1986432 cri.go:89] found id: ""
	I1124 10:45:28.415677 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.415687 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:28.415693 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:28.415759 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:28.452408 1986432 cri.go:89] found id: ""
	I1124 10:45:28.452431 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.452440 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:28.452447 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:28.452503 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:28.515827 1986432 cri.go:89] found id: ""
	I1124 10:45:28.515849 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.515857 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:28.515864 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:28.515922 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:28.549890 1986432 cri.go:89] found id: ""
	I1124 10:45:28.549918 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.549927 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:28.549934 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:28.549994 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:28.584109 1986432 cri.go:89] found id: ""
	I1124 10:45:28.584131 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.584139 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:28.584146 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:28.584207 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:28.624082 1986432 cri.go:89] found id: ""
	I1124 10:45:28.624104 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.624113 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:28.624120 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:28.624178 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:28.659882 1986432 cri.go:89] found id: ""
	I1124 10:45:28.659904 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.659913 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:28.659920 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:28.659980 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:28.700857 1986432 cri.go:89] found id: ""
	I1124 10:45:28.700879 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.700889 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:28.700898 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:28.700910 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:28.775188 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:28.775272 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:28.795604 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:28.795630 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:28.888919 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:28.888937 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:28.888949 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:28.945578 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:28.945658 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:31.477317 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:31.488832 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:31.488906 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:31.526049 1986432 cri.go:89] found id: ""
	I1124 10:45:31.526077 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.526087 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:31.526094 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:31.526152 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:31.552114 1986432 cri.go:89] found id: ""
	I1124 10:45:31.552139 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.552148 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:31.552154 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:31.552215 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:31.590559 1986432 cri.go:89] found id: ""
	I1124 10:45:31.590586 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.590596 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:31.590603 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:31.590663 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:31.623429 1986432 cri.go:89] found id: ""
	I1124 10:45:31.623456 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.623466 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:31.623473 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:31.623535 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:31.657007 1986432 cri.go:89] found id: ""
	I1124 10:45:31.657031 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.657040 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:31.657047 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:31.657135 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:31.685936 1986432 cri.go:89] found id: ""
	I1124 10:45:31.685961 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.685970 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:31.685977 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:31.686036 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:31.724393 1986432 cri.go:89] found id: ""
	I1124 10:45:31.724420 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.724429 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:31.724436 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:31.724493 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:31.760530 1986432 cri.go:89] found id: ""
	I1124 10:45:31.766988 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.767073 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:31.767880 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:31.767896 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:31.811494 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:31.811526 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:31.903608 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:31.903649 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:31.921908 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:31.921948 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:31.999244 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:31.999279 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:31.999293 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:34.558990 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:34.570638 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:34.570725 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:34.608415 1986432 cri.go:89] found id: ""
	I1124 10:45:34.608444 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.608454 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:34.608460 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:34.608522 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:34.655752 1986432 cri.go:89] found id: ""
	I1124 10:45:34.655780 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.655789 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:34.655796 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:34.655866 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:34.707801 1986432 cri.go:89] found id: ""
	I1124 10:45:34.707829 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.707838 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:34.707845 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:34.707903 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:34.755972 1986432 cri.go:89] found id: ""
	I1124 10:45:34.755998 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.756009 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:34.756027 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:34.756101 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:34.803767 1986432 cri.go:89] found id: ""
	I1124 10:45:34.803796 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.803804 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:34.803812 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:34.803881 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:34.851006 1986432 cri.go:89] found id: ""
	I1124 10:45:34.851034 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.851043 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:34.851052 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:34.851115 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:34.904329 1986432 cri.go:89] found id: ""
	I1124 10:45:34.904358 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.904367 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:34.904374 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:34.904435 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:34.951467 1986432 cri.go:89] found id: ""
	I1124 10:45:34.951494 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.951503 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:34.951512 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:34.951524 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:35.042914 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:35.042954 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:35.063633 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:35.063669 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:35.167271 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:35.167295 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:35.167310 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:35.253298 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:35.253341 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:37.835094 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:37.848220 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:37.848315 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:37.888724 1986432 cri.go:89] found id: ""
	I1124 10:45:37.888756 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.888766 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:37.888773 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:37.888836 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:37.924370 1986432 cri.go:89] found id: ""
	I1124 10:45:37.924396 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.924405 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:37.924412 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:37.924477 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:37.974590 1986432 cri.go:89] found id: ""
	I1124 10:45:37.974626 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.974636 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:37.974643 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:37.974723 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:38.015170 1986432 cri.go:89] found id: ""
	I1124 10:45:38.015209 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.015220 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:38.015236 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:38.015330 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:38.074440 1986432 cri.go:89] found id: ""
	I1124 10:45:38.074499 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.074512 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:38.074533 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:38.074618 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:38.130684 1986432 cri.go:89] found id: ""
	I1124 10:45:38.130720 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.130731 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:38.130738 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:38.130812 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:38.170942 1986432 cri.go:89] found id: ""
	I1124 10:45:38.170983 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.170994 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:38.171001 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:38.171073 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:38.213467 1986432 cri.go:89] found id: ""
	I1124 10:45:38.213508 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.213518 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:38.213533 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:38.213563 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:38.363953 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:38.363977 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:38.363989 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:38.405041 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:38.405077 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:38.435192 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:38.435222 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:38.509425 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:38.509466 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:41.028226 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:41.038492 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:41.038559 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:41.066358 1986432 cri.go:89] found id: ""
	I1124 10:45:41.066381 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.066390 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:41.066397 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:41.066455 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:41.095869 1986432 cri.go:89] found id: ""
	I1124 10:45:41.095892 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.095901 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:41.095908 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:41.095965 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:41.124298 1986432 cri.go:89] found id: ""
	I1124 10:45:41.124321 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.124330 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:41.124336 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:41.124394 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:41.150773 1986432 cri.go:89] found id: ""
	I1124 10:45:41.150799 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.150807 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:41.150815 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:41.150876 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:41.177034 1986432 cri.go:89] found id: ""
	I1124 10:45:41.177057 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.177066 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:41.177072 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:41.177190 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:41.214595 1986432 cri.go:89] found id: ""
	I1124 10:45:41.214626 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.214635 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:41.214642 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:41.214700 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:41.246230 1986432 cri.go:89] found id: ""
	I1124 10:45:41.246255 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.246264 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:41.246271 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:41.246338 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:41.283456 1986432 cri.go:89] found id: ""
	I1124 10:45:41.283481 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.283490 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:41.283499 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:41.283511 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:41.358438 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:41.358475 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:41.379384 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:41.379415 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:41.445291 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:41.445364 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:41.445407 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:41.488247 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:41.488291 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:44.023834 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:44.034548 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:44.034620 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:44.064681 1986432 cri.go:89] found id: ""
	I1124 10:45:44.064705 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.064714 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:44.064721 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:44.064781 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:44.100184 1986432 cri.go:89] found id: ""
	I1124 10:45:44.100207 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.100217 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:44.100224 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:44.100281 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:44.142287 1986432 cri.go:89] found id: ""
	I1124 10:45:44.142314 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.142327 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:44.142334 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:44.142393 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:44.181305 1986432 cri.go:89] found id: ""
	I1124 10:45:44.181333 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.181342 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:44.181349 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:44.181430 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:44.218457 1986432 cri.go:89] found id: ""
	I1124 10:45:44.218483 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.218502 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:44.218509 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:44.218581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:44.251490 1986432 cri.go:89] found id: ""
	I1124 10:45:44.251517 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.251526 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:44.251532 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:44.251596 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:44.295854 1986432 cri.go:89] found id: ""
	I1124 10:45:44.295881 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.295890 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:44.295897 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:44.295962 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:44.326463 1986432 cri.go:89] found id: ""
	I1124 10:45:44.326484 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.326492 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:44.326501 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:44.326513 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:44.413633 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:44.413657 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:44.413670 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:44.455548 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:44.455583 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:44.484839 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:44.484875 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:44.559243 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:44.559282 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:47.077797 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:47.088211 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:47.088277 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:47.118728 1986432 cri.go:89] found id: ""
	I1124 10:45:47.118750 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.118760 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:47.118767 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:47.118825 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:47.150483 1986432 cri.go:89] found id: ""
	I1124 10:45:47.150507 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.150516 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:47.150523 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:47.150581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:47.181735 1986432 cri.go:89] found id: ""
	I1124 10:45:47.181758 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.181767 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:47.181774 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:47.181833 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:47.214333 1986432 cri.go:89] found id: ""
	I1124 10:45:47.214356 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.214365 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:47.214371 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:47.214432 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:47.244171 1986432 cri.go:89] found id: ""
	I1124 10:45:47.244250 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.244273 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:47.244296 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:47.244395 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:47.275473 1986432 cri.go:89] found id: ""
	I1124 10:45:47.275494 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.275503 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:47.275510 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:47.275568 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:47.309070 1986432 cri.go:89] found id: ""
	I1124 10:45:47.309092 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.309181 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:47.309191 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:47.309250 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:47.336153 1986432 cri.go:89] found id: ""
	I1124 10:45:47.336174 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.336183 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:47.336193 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:47.336204 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:47.406220 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:47.406257 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:47.424789 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:47.424817 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:47.493299 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:47.493323 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:47.493339 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:47.537076 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:47.537117 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:50.067104 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:50.078283 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:50.078363 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:50.109047 1986432 cri.go:89] found id: ""
	I1124 10:45:50.109074 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.109083 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:50.109090 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:50.109175 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:50.137023 1986432 cri.go:89] found id: ""
	I1124 10:45:50.137046 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.137054 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:50.137060 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:50.137146 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:50.168293 1986432 cri.go:89] found id: ""
	I1124 10:45:50.168316 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.168333 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:50.168340 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:50.168402 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:50.196807 1986432 cri.go:89] found id: ""
	I1124 10:45:50.196831 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.196840 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:50.196847 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:50.196918 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:50.235910 1986432 cri.go:89] found id: ""
	I1124 10:45:50.235932 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.235941 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:50.235947 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:50.236012 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:50.272648 1986432 cri.go:89] found id: ""
	I1124 10:45:50.272671 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.272681 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:50.272688 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:50.272750 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:50.306519 1986432 cri.go:89] found id: ""
	I1124 10:45:50.306542 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.306550 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:50.306556 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:50.306621 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:50.337670 1986432 cri.go:89] found id: ""
	I1124 10:45:50.337692 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.337700 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:50.337710 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:50.337721 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:50.408914 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:50.408955 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:50.427976 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:50.428169 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:50.503359 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:50.503430 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:50.503458 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:50.544309 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:50.544354 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:53.085257 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:53.095705 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:53.095776 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:53.121867 1986432 cri.go:89] found id: ""
	I1124 10:45:53.121891 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.121900 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:53.121912 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:53.121975 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:53.149089 1986432 cri.go:89] found id: ""
	I1124 10:45:53.149151 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.149160 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:53.149167 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:53.149236 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:53.175431 1986432 cri.go:89] found id: ""
	I1124 10:45:53.175454 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.175462 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:53.175470 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:53.175528 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:53.203073 1986432 cri.go:89] found id: ""
	I1124 10:45:53.203100 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.203110 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:53.203117 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:53.203175 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:53.245799 1986432 cri.go:89] found id: ""
	I1124 10:45:53.245825 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.245833 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:53.245840 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:53.245906 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:53.276054 1986432 cri.go:89] found id: ""
	I1124 10:45:53.276075 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.276084 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:53.276090 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:53.276149 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:53.303453 1986432 cri.go:89] found id: ""
	I1124 10:45:53.303484 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.303493 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:53.303500 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:53.303556 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:53.335209 1986432 cri.go:89] found id: ""
	I1124 10:45:53.335231 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.335239 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:53.335248 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:53.335260 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:53.448690 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:53.448713 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:53.448726 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:53.500771 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:53.500811 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:53.541966 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:53.541996 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:53.627838 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:53.627878 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:56.145599 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:56.157574 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:56.157643 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:56.196385 1986432 cri.go:89] found id: ""
	I1124 10:45:56.196408 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.196422 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:56.196429 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:56.196489 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:56.252617 1986432 cri.go:89] found id: ""
	I1124 10:45:56.252640 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.252718 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:56.252730 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:56.252799 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:56.298566 1986432 cri.go:89] found id: ""
	I1124 10:45:56.298587 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.298595 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:56.298601 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:56.298658 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:56.346392 1986432 cri.go:89] found id: ""
	I1124 10:45:56.346417 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.346426 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:56.346433 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:56.346495 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:56.390111 1986432 cri.go:89] found id: ""
	I1124 10:45:56.390137 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.390145 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:56.390153 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:56.390218 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:56.423308 1986432 cri.go:89] found id: ""
	I1124 10:45:56.423336 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.423344 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:56.423351 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:56.423412 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:56.468963 1986432 cri.go:89] found id: ""
	I1124 10:45:56.468987 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.468995 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:56.469002 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:56.469061 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:56.510694 1986432 cri.go:89] found id: ""
	I1124 10:45:56.510719 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.510728 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:56.510737 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:56.510750 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:56.595485 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:56.595558 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:56.618105 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:56.618131 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:56.703129 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:56.703201 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:56.703228 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:56.754363 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:56.754402 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:59.303242 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:59.314356 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:59.314424 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:59.346229 1986432 cri.go:89] found id: ""
	I1124 10:45:59.346251 1986432 logs.go:282] 0 containers: []
	W1124 10:45:59.346260 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:59.346273 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:59.346333 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:59.386510 1986432 cri.go:89] found id: ""
	I1124 10:45:59.386531 1986432 logs.go:282] 0 containers: []
	W1124 10:45:59.386540 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:59.386546 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:59.386603 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:59.430604 1986432 cri.go:89] found id: ""
	I1124 10:45:59.430628 1986432 logs.go:282] 0 containers: []
	W1124 10:45:59.430638 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:59.430645 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:59.430705 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:59.478524 1986432 cri.go:89] found id: ""
	I1124 10:45:59.478546 1986432 logs.go:282] 0 containers: []
	W1124 10:45:59.478555 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:59.478562 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:59.478622 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:59.557683 1986432 cri.go:89] found id: ""
	I1124 10:45:59.557705 1986432 logs.go:282] 0 containers: []
	W1124 10:45:59.557713 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:59.557720 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:59.557790 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:59.593726 1986432 cri.go:89] found id: ""
	I1124 10:45:59.593748 1986432 logs.go:282] 0 containers: []
	W1124 10:45:59.593756 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:59.593763 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:59.593833 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:59.643641 1986432 cri.go:89] found id: ""
	I1124 10:45:59.643664 1986432 logs.go:282] 0 containers: []
	W1124 10:45:59.643673 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:59.643679 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:59.643737 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:59.707154 1986432 cri.go:89] found id: ""
	I1124 10:45:59.707175 1986432 logs.go:282] 0 containers: []
	W1124 10:45:59.707184 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:59.707193 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:59.707204 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:59.781513 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:59.781564 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:59.815601 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:59.815632 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:59.889911 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:59.889959 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:59.909131 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:59.909163 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:59.988155 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:46:02.488865 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:46:02.513662 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:46:02.513738 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:46:02.551717 1986432 cri.go:89] found id: ""
	I1124 10:46:02.551743 1986432 logs.go:282] 0 containers: []
	W1124 10:46:02.551752 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:46:02.551759 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:46:02.551820 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:46:02.590309 1986432 cri.go:89] found id: ""
	I1124 10:46:02.590330 1986432 logs.go:282] 0 containers: []
	W1124 10:46:02.590338 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:46:02.590344 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:46:02.590399 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:46:02.638351 1986432 cri.go:89] found id: ""
	I1124 10:46:02.638372 1986432 logs.go:282] 0 containers: []
	W1124 10:46:02.638381 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:46:02.638387 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:46:02.638455 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:46:02.682640 1986432 cri.go:89] found id: ""
	I1124 10:46:02.682662 1986432 logs.go:282] 0 containers: []
	W1124 10:46:02.682671 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:46:02.682678 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:46:02.682737 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:46:02.716824 1986432 cri.go:89] found id: ""
	I1124 10:46:02.716847 1986432 logs.go:282] 0 containers: []
	W1124 10:46:02.716855 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:46:02.716862 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:46:02.716921 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:46:02.757022 1986432 cri.go:89] found id: ""
	I1124 10:46:02.757045 1986432 logs.go:282] 0 containers: []
	W1124 10:46:02.757054 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:46:02.757061 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:46:02.757160 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:46:02.805923 1986432 cri.go:89] found id: ""
	I1124 10:46:02.805944 1986432 logs.go:282] 0 containers: []
	W1124 10:46:02.805953 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:46:02.805959 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:46:02.806020 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:46:02.847644 1986432 cri.go:89] found id: ""
	I1124 10:46:02.847664 1986432 logs.go:282] 0 containers: []
	W1124 10:46:02.847672 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:46:02.847681 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:46:02.847693 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:46:02.925577 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:46:02.925658 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:46:02.943246 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:46:02.943270 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:46:03.025268 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:46:03.025305 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:46:03.025335 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:46:03.072356 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:46:03.072396 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:46:05.615960 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:46:05.626892 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:46:05.626969 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:46:05.657041 1986432 cri.go:89] found id: ""
	I1124 10:46:05.657068 1986432 logs.go:282] 0 containers: []
	W1124 10:46:05.657083 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:46:05.657091 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:46:05.657176 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:46:05.685277 1986432 cri.go:89] found id: ""
	I1124 10:46:05.685308 1986432 logs.go:282] 0 containers: []
	W1124 10:46:05.685317 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:46:05.685323 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:46:05.685380 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:46:05.713818 1986432 cri.go:89] found id: ""
	I1124 10:46:05.713840 1986432 logs.go:282] 0 containers: []
	W1124 10:46:05.713849 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:46:05.713855 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:46:05.713914 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:46:05.742252 1986432 cri.go:89] found id: ""
	I1124 10:46:05.742279 1986432 logs.go:282] 0 containers: []
	W1124 10:46:05.742288 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:46:05.742295 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:46:05.742353 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:46:05.772273 1986432 cri.go:89] found id: ""
	I1124 10:46:05.772300 1986432 logs.go:282] 0 containers: []
	W1124 10:46:05.772310 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:46:05.772316 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:46:05.772378 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:46:05.798892 1986432 cri.go:89] found id: ""
	I1124 10:46:05.798913 1986432 logs.go:282] 0 containers: []
	W1124 10:46:05.798922 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:46:05.798929 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:46:05.798989 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:46:05.828051 1986432 cri.go:89] found id: ""
	I1124 10:46:05.828080 1986432 logs.go:282] 0 containers: []
	W1124 10:46:05.828088 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:46:05.828095 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:46:05.828154 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:46:05.859458 1986432 cri.go:89] found id: ""
	I1124 10:46:05.859486 1986432 logs.go:282] 0 containers: []
	W1124 10:46:05.859496 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:46:05.859505 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:46:05.859517 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:46:05.902193 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:46:05.902231 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:46:05.936486 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:46:05.936514 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:46:06.008444 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:46:06.008494 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:46:06.027222 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:46:06.027251 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:46:06.093445 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:46:08.594328 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:46:08.606759 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:46:08.606836 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:46:08.643130 1986432 cri.go:89] found id: ""
	I1124 10:46:08.643159 1986432 logs.go:282] 0 containers: []
	W1124 10:46:08.643168 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:46:08.643175 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:46:08.643231 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:46:08.693950 1986432 cri.go:89] found id: ""
	I1124 10:46:08.693979 1986432 logs.go:282] 0 containers: []
	W1124 10:46:08.693988 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:46:08.693995 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:46:08.694057 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:46:08.728061 1986432 cri.go:89] found id: ""
	I1124 10:46:08.728090 1986432 logs.go:282] 0 containers: []
	W1124 10:46:08.728098 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:46:08.728105 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:46:08.728161 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:46:08.768276 1986432 cri.go:89] found id: ""
	I1124 10:46:08.768303 1986432 logs.go:282] 0 containers: []
	W1124 10:46:08.768311 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:46:08.768318 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:46:08.768379 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:46:08.803615 1986432 cri.go:89] found id: ""
	I1124 10:46:08.803642 1986432 logs.go:282] 0 containers: []
	W1124 10:46:08.803651 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:46:08.803658 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:46:08.803714 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:46:08.840633 1986432 cri.go:89] found id: ""
	I1124 10:46:08.840669 1986432 logs.go:282] 0 containers: []
	W1124 10:46:08.840678 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:46:08.840686 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:46:08.840743 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:46:08.869883 1986432 cri.go:89] found id: ""
	I1124 10:46:08.869912 1986432 logs.go:282] 0 containers: []
	W1124 10:46:08.869921 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:46:08.869928 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:46:08.869987 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:46:08.903383 1986432 cri.go:89] found id: ""
	I1124 10:46:08.903410 1986432 logs.go:282] 0 containers: []
	W1124 10:46:08.903420 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:46:08.903428 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:46:08.903440 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:46:08.946632 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:46:08.946670 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:46:08.987117 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:46:08.987147 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:46:09.067632 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:46:09.067708 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:46:09.084777 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:46:09.084858 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:46:09.172263 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:46:11.673316 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:46:11.683670 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:46:11.683746 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:46:11.717009 1986432 cri.go:89] found id: ""
	I1124 10:46:11.717037 1986432 logs.go:282] 0 containers: []
	W1124 10:46:11.717045 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:46:11.717052 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:46:11.717150 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:46:11.745976 1986432 cri.go:89] found id: ""
	I1124 10:46:11.746004 1986432 logs.go:282] 0 containers: []
	W1124 10:46:11.746012 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:46:11.746019 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:46:11.746082 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:46:11.775960 1986432 cri.go:89] found id: ""
	I1124 10:46:11.775988 1986432 logs.go:282] 0 containers: []
	W1124 10:46:11.775997 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:46:11.776004 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:46:11.776060 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:46:11.805720 1986432 cri.go:89] found id: ""
	I1124 10:46:11.805748 1986432 logs.go:282] 0 containers: []
	W1124 10:46:11.805757 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:46:11.805764 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:46:11.805822 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:46:11.852041 1986432 cri.go:89] found id: ""
	I1124 10:46:11.852070 1986432 logs.go:282] 0 containers: []
	W1124 10:46:11.852079 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:46:11.852086 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:46:11.852143 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:46:11.889488 1986432 cri.go:89] found id: ""
	I1124 10:46:11.889510 1986432 logs.go:282] 0 containers: []
	W1124 10:46:11.889518 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:46:11.889524 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:46:11.889582 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:46:11.924583 1986432 cri.go:89] found id: ""
	I1124 10:46:11.924607 1986432 logs.go:282] 0 containers: []
	W1124 10:46:11.924616 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:46:11.924623 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:46:11.924679 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:46:11.958182 1986432 cri.go:89] found id: ""
	I1124 10:46:11.958211 1986432 logs.go:282] 0 containers: []
	W1124 10:46:11.958221 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:46:11.958230 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:46:11.958242 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:46:12.037167 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:46:12.037201 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:46:12.055819 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:46:12.055851 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:46:12.136751 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:46:12.136775 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:46:12.136787 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:46:12.188060 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:46:12.188110 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:46:14.792854 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:46:14.805071 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:46:14.805167 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:46:14.844746 1986432 cri.go:89] found id: ""
	I1124 10:46:14.844768 1986432 logs.go:282] 0 containers: []
	W1124 10:46:14.844778 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:46:14.844784 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:46:14.844842 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:46:14.875823 1986432 cri.go:89] found id: ""
	I1124 10:46:14.875846 1986432 logs.go:282] 0 containers: []
	W1124 10:46:14.875855 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:46:14.875862 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:46:14.875923 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:46:14.905602 1986432 cri.go:89] found id: ""
	I1124 10:46:14.905626 1986432 logs.go:282] 0 containers: []
	W1124 10:46:14.905635 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:46:14.905641 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:46:14.905700 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:46:14.941685 1986432 cri.go:89] found id: ""
	I1124 10:46:14.941706 1986432 logs.go:282] 0 containers: []
	W1124 10:46:14.941715 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:46:14.941722 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:46:14.941782 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:46:14.974608 1986432 cri.go:89] found id: ""
	I1124 10:46:14.974629 1986432 logs.go:282] 0 containers: []
	W1124 10:46:14.974637 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:46:14.974644 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:46:14.974708 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:46:15.015192 1986432 cri.go:89] found id: ""
	I1124 10:46:15.015217 1986432 logs.go:282] 0 containers: []
	W1124 10:46:15.015227 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:46:15.015235 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:46:15.015306 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:46:15.057583 1986432 cri.go:89] found id: ""
	I1124 10:46:15.057607 1986432 logs.go:282] 0 containers: []
	W1124 10:46:15.057617 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:46:15.057624 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:46:15.057725 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:46:15.106453 1986432 cri.go:89] found id: ""
	I1124 10:46:15.106480 1986432 logs.go:282] 0 containers: []
	W1124 10:46:15.106489 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:46:15.106498 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:46:15.106509 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:46:15.161725 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:46:15.161764 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:46:15.223122 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:46:15.223152 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:46:15.361082 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:46:15.361179 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:46:15.379813 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:46:15.379944 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:46:15.463805 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:46:17.964097 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:46:17.974647 1986432 kubeadm.go:602] duration metric: took 4m4.587151962s to restartPrimaryControlPlane
	W1124 10:46:17.974781 1986432 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1124 10:46:17.975045 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 10:46:18.411210 1986432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:46:18.428966 1986432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 10:46:18.441087 1986432 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 10:46:18.441276 1986432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:46:18.455684 1986432 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 10:46:18.455710 1986432 kubeadm.go:158] found existing configuration files:
	
	I1124 10:46:18.455769 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 10:46:18.467600 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 10:46:18.467682 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 10:46:18.478766 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 10:46:18.491279 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 10:46:18.491369 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 10:46:18.502199 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 10:46:18.514521 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 10:46:18.514604 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:46:18.525771 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 10:46:18.537173 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 10:46:18.537265 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:46:18.548296 1986432 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 10:46:18.607311 1986432 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 10:46:18.607728 1986432 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 10:46:18.724187 1986432 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 10:46:18.724296 1986432 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 10:46:18.724360 1986432 kubeadm.go:319] OS: Linux
	I1124 10:46:18.724424 1986432 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 10:46:18.724496 1986432 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 10:46:18.724579 1986432 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 10:46:18.724648 1986432 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 10:46:18.724722 1986432 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 10:46:18.724792 1986432 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 10:46:18.724867 1986432 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 10:46:18.724953 1986432 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 10:46:18.725020 1986432 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 10:46:18.809619 1986432 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 10:46:18.809751 1986432 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 10:46:18.809865 1986432 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 10:46:21.908627 1986432 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 10:46:21.912448 1986432 out.go:252]   - Generating certificates and keys ...
	I1124 10:46:21.912561 1986432 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 10:46:21.912661 1986432 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 10:46:21.912757 1986432 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 10:46:21.912841 1986432 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 10:46:21.913501 1986432 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 10:46:21.914205 1986432 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 10:46:21.914916 1986432 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 10:46:21.915595 1986432 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 10:46:21.916308 1986432 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 10:46:21.917019 1986432 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 10:46:21.917925 1986432 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 10:46:21.918254 1986432 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 10:46:22.075754 1986432 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 10:46:22.288376 1986432 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 10:46:23.038640 1986432 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 10:46:23.113975 1986432 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 10:46:23.582605 1986432 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 10:46:23.583985 1986432 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 10:46:23.587788 1986432 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 10:46:23.591322 1986432 out.go:252]   - Booting up control plane ...
	I1124 10:46:23.591458 1986432 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 10:46:23.591584 1986432 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 10:46:23.598323 1986432 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 10:46:23.624670 1986432 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 10:46:23.624786 1986432 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 10:46:23.634577 1986432 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 10:46:23.634678 1986432 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 10:46:23.634721 1986432 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 10:46:23.790813 1986432 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 10:46:23.790938 1986432 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:50:23.791428 1986432 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000956429s
	I1124 10:50:23.791463 1986432 kubeadm.go:319] 
	I1124 10:50:23.791521 1986432 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:50:23.791566 1986432 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:50:23.791671 1986432 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:50:23.791678 1986432 kubeadm.go:319] 
	I1124 10:50:23.791782 1986432 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:50:23.791814 1986432 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:50:23.791855 1986432 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:50:23.791860 1986432 kubeadm.go:319] 
	I1124 10:50:23.795961 1986432 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:50:23.796388 1986432 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:50:23.796530 1986432 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:50:23.796758 1986432 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:50:23.796765 1986432 kubeadm.go:319] 
	I1124 10:50:23.796830 1986432 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1124 10:50:23.796931 1986432 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000956429s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1124 10:50:23.797006 1986432 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1124 10:50:24.216344 1986432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:50:24.230200 1986432 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 10:50:24.230322 1986432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:50:24.238919 1986432 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 10:50:24.238939 1986432 kubeadm.go:158] found existing configuration files:
	
	I1124 10:50:24.238992 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 10:50:24.247876 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 10:50:24.247946 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 10:50:24.256061 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 10:50:24.264653 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 10:50:24.264727 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 10:50:24.272626 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 10:50:24.281001 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 10:50:24.281070 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:50:24.289251 1986432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 10:50:24.297356 1986432 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 10:50:24.297434 1986432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:50:24.305797 1986432 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 10:50:24.345555 1986432 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 10:50:24.345894 1986432 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 10:50:24.418690 1986432 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 10:50:24.418764 1986432 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 10:50:24.418805 1986432 kubeadm.go:319] OS: Linux
	I1124 10:50:24.418853 1986432 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 10:50:24.418904 1986432 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 10:50:24.418953 1986432 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 10:50:24.419002 1986432 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 10:50:24.419052 1986432 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 10:50:24.419111 1986432 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 10:50:24.419158 1986432 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 10:50:24.419208 1986432 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 10:50:24.419256 1986432 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 10:50:24.490156 1986432 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 10:50:24.490271 1986432 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 10:50:24.490368 1986432 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 10:50:24.509703 1986432 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 10:50:24.513338 1986432 out.go:252]   - Generating certificates and keys ...
	I1124 10:50:24.513457 1986432 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 10:50:24.513544 1986432 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 10:50:24.513639 1986432 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1124 10:50:24.513725 1986432 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1124 10:50:24.513811 1986432 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1124 10:50:24.513880 1986432 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1124 10:50:24.513961 1986432 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1124 10:50:24.514040 1986432 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1124 10:50:24.514130 1986432 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1124 10:50:24.514210 1986432 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1124 10:50:24.514260 1986432 kubeadm.go:319] [certs] Using the existing "sa" key
	I1124 10:50:24.514323 1986432 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 10:50:24.779727 1986432 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 10:50:25.071694 1986432 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 10:50:25.853745 1986432 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 10:50:26.174150 1986432 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 10:50:26.595055 1986432 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 10:50:26.597364 1986432 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 10:50:26.600644 1986432 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 10:50:26.604031 1986432 out.go:252]   - Booting up control plane ...
	I1124 10:50:26.604139 1986432 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 10:50:26.604214 1986432 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 10:50:26.604291 1986432 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 10:50:26.618537 1986432 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 10:50:26.618667 1986432 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 10:50:26.633455 1986432 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 10:50:26.633573 1986432 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 10:50:26.633619 1986432 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 10:50:26.767606 1986432 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 10:50:26.767727 1986432 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 10:54:26.767390 1986432 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001032204s
	I1124 10:54:26.767428 1986432 kubeadm.go:319] 
	I1124 10:54:26.767483 1986432 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:54:26.767521 1986432 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:54:26.767623 1986432 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:54:26.767632 1986432 kubeadm.go:319] 
	I1124 10:54:26.767731 1986432 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:54:26.767765 1986432 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:54:26.767798 1986432 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:54:26.767805 1986432 kubeadm.go:319] 
	I1124 10:54:26.771960 1986432 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:54:26.772486 1986432 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:54:26.772609 1986432 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:54:26.772893 1986432 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:54:26.772931 1986432 kubeadm.go:319] 
	I1124 10:54:26.773067 1986432 kubeadm.go:403] duration metric: took 12m13.436885573s to StartCluster
	I1124 10:54:26.773197 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:54:26.773210 1986432 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1124 10:54:26.773259 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:54:26.802426 1986432 cri.go:89] found id: ""
	I1124 10:54:26.802452 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.802461 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:54:26.802469 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:54:26.802542 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:54:26.833419 1986432 cri.go:89] found id: ""
	I1124 10:54:26.833447 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.833457 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:54:26.833464 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:54:26.833527 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:54:26.861824 1986432 cri.go:89] found id: ""
	I1124 10:54:26.861854 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.861864 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:54:26.861871 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:54:26.861936 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:54:26.893971 1986432 cri.go:89] found id: ""
	I1124 10:54:26.893996 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.894006 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:54:26.894013 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:54:26.894089 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:54:26.922120 1986432 cri.go:89] found id: ""
	I1124 10:54:26.922145 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.922155 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:54:26.922161 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:54:26.922218 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:54:26.949491 1986432 cri.go:89] found id: ""
	I1124 10:54:26.949514 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.949523 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:54:26.949530 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:54:26.949590 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:54:26.975692 1986432 cri.go:89] found id: ""
	I1124 10:54:26.975718 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.975727 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:54:26.975733 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:54:26.975792 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:54:27.005263 1986432 cri.go:89] found id: ""
	I1124 10:54:27.005289 1986432 logs.go:282] 0 containers: []
	W1124 10:54:27.005299 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:54:27.005309 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:54:27.005322 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:54:27.080411 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:54:27.080449 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:54:27.098348 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:54:27.098377 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:54:27.168103 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:54:27.168127 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:54:27.168141 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:54:27.215650 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:54:27.215685 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:54:27.247499 1986432 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001032204s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1124 10:54:27.247551 1986432 out.go:285] * 
	W1124 10:54:27.247629 1986432 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001032204s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:54:27.247882 1986432 out.go:285] * 
	W1124 10:54:27.250129 1986432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 10:54:27.255665 1986432 out.go:203] 
	W1124 10:54:27.258526 1986432 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001032204s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:54:27.258575 1986432 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1124 10:54:27.258596 1986432 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1124 10:54:27.261630 1986432 out.go:203] 

                                                
                                                
** /stderr **
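The failure above is kubeadm's `wait-control-plane` phase timing out while polling the kubelet's healthz endpoint (`http://127.0.0.1:10248/healthz`) for up to 4m0s. As a rough illustration of that check (a sketch only; the real probe lives in kubeadm's Go code, and the throwaway HTTP server and port here are stand-ins for a kubelet):

```python
import http.server
import threading
import time
import urllib.request

class Healthz(http.server.BaseHTTPRequestHandler):
    """Stand-in for the kubelet healthz endpoint (hypothetical, for illustration)."""
    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example's output quiet

def wait_healthy(url, deadline_s=5.0, interval_s=0.2):
    """Poll url until it returns HTTP 200 or the deadline expires,
    mirroring the shape of kubeadm's kubelet-check loop."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # connection refused / timeout: kubelet not up yet
        time.sleep(interval_s)
    return False  # kubeadm reports "The kubelet is not healthy after ..."

srv = http.server.HTTPServer(("127.0.0.1", 0), Healthz)  # port 0: any free port
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]
print(wait_healthy(f"http://127.0.0.1:{port}/healthz"))  # → True
srv.shutdown()
```

In the log above the loop never sees a 200 (the kubelet process itself is not coming up), which is why kubeadm falls through to the `systemctl status kubelet` / `journalctl -xeu kubelet` advice and minikube suggests `--extra-config=kubelet.cgroup-driver=systemd`.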
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-306449 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-306449 version --output=json: exit status 1 (143.625139ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.85.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
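The `kubectl version --output=json` output above contains only the client side: because the apiserver at 192.168.85.2:8443 refused the connection, no `serverVersion` block is emitted and the command exits non-zero. A small check of that shape (the payload below is an abridged copy of the captured output):

```python
import json

# Abridged client-only payload, as captured when the apiserver is unreachable.
payload = '''{
  "clientVersion": {
    "major": "1",
    "minor": "33",
    "gitVersion": "v1.33.2",
    "platform": "linux/arm64"
  },
  "kustomizeVersion": "v5.6.0"
}'''

doc = json.loads(payload)
print(doc["clientVersion"]["gitVersion"])  # → v1.33.2
print("serverVersion" in doc)              # → False: server never answered
```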
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-11-24 10:54:28.069599543 +0000 UTC m=+6120.649431856
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-306449
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-306449:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28afe8aa80a25a8af95608dab1745f3540d4d1762914100d1e25a8dfcb9f16b3",
	        "Created": "2025-11-24T10:41:16.469314105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1986566,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T10:41:47.357160033Z",
	            "FinishedAt": "2025-11-24T10:41:46.216320142Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/28afe8aa80a25a8af95608dab1745f3540d4d1762914100d1e25a8dfcb9f16b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28afe8aa80a25a8af95608dab1745f3540d4d1762914100d1e25a8dfcb9f16b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/28afe8aa80a25a8af95608dab1745f3540d4d1762914100d1e25a8dfcb9f16b3/hosts",
	        "LogPath": "/var/lib/docker/containers/28afe8aa80a25a8af95608dab1745f3540d4d1762914100d1e25a8dfcb9f16b3/28afe8aa80a25a8af95608dab1745f3540d4d1762914100d1e25a8dfcb9f16b3-json.log",
	        "Name": "/kubernetes-upgrade-306449",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-306449:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-306449",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28afe8aa80a25a8af95608dab1745f3540d4d1762914100d1e25a8dfcb9f16b3",
	                "LowerDir": "/var/lib/docker/overlay2/25d68c10c3a0ee250de385376939f01c11befdceb6670eea141f6bab72e421b8-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25d68c10c3a0ee250de385376939f01c11befdceb6670eea141f6bab72e421b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25d68c10c3a0ee250de385376939f01c11befdceb6670eea141f6bab72e421b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25d68c10c3a0ee250de385376939f01c11befdceb6670eea141f6bab72e421b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-306449",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-306449/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-306449",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-306449",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-306449",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edfdfd367a4aa4bf0bca71b19f94861b1c920df02f3c578430e27f48d282fa16",
	            "SandboxKey": "/var/run/docker/netns/edfdfd367a4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35230"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35231"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35234"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35232"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35233"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-306449": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:52:7c:40:66:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c82978c02a215c0bc737e62cd3b6674aa8605ef001bc64ec93425c76140d7a4",
	                    "EndpointID": "6a4cc8394a95e6fa7386138f299f6b153f34e30fd1c27e95c33c09bc92d09e08",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-306449",
	                        "28afe8aa80a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-306449 -n kubernetes-upgrade-306449
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-306449 -n kubernetes-upgrade-306449: exit status 2 (352.154338ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-306449 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-484396 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo containerd config dump                                                                                                                                                                                                  │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ ssh     │ -p cilium-484396 sudo crio config                                                                                                                                                                                                             │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │                     │
	│ delete  │ -p cilium-484396                                                                                                                                                                                                                              │ cilium-484396            │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │ 24 Nov 25 10:46 UTC │
	│ start   │ -p force-systemd-env-478222 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-478222 │ jenkins │ v1.37.0 │ 24 Nov 25 10:46 UTC │ 24 Nov 25 10:47 UTC │
	│ delete  │ -p force-systemd-env-478222                                                                                                                                                                                                                   │ force-systemd-env-478222 │ jenkins │ v1.37.0 │ 24 Nov 25 10:47 UTC │ 24 Nov 25 10:47 UTC │
	│ start   │ -p cert-expiration-352809 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-352809   │ jenkins │ v1.37.0 │ 24 Nov 25 10:47 UTC │ 24 Nov 25 10:47 UTC │
	│ start   │ -p cert-expiration-352809 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-352809   │ jenkins │ v1.37.0 │ 24 Nov 25 10:50 UTC │ 24 Nov 25 10:52 UTC │
	│ delete  │ -p cert-expiration-352809                                                                                                                                                                                                                     │ cert-expiration-352809   │ jenkins │ v1.37.0 │ 24 Nov 25 10:52 UTC │ 24 Nov 25 10:52 UTC │
	│ start   │ -p cert-options-041668 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-041668      │ jenkins │ v1.37.0 │ 24 Nov 25 10:52 UTC │ 24 Nov 25 10:53 UTC │
	│ ssh     │ cert-options-041668 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-041668      │ jenkins │ v1.37.0 │ 24 Nov 25 10:53 UTC │ 24 Nov 25 10:53 UTC │
	│ ssh     │ -p cert-options-041668 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-041668      │ jenkins │ v1.37.0 │ 24 Nov 25 10:53 UTC │ 24 Nov 25 10:53 UTC │
	│ delete  │ -p cert-options-041668                                                                                                                                                                                                                        │ cert-options-041668      │ jenkins │ v1.37.0 │ 24 Nov 25 10:53 UTC │ 24 Nov 25 10:53 UTC │
	│ start   │ -p old-k8s-version-449797 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-449797   │ jenkins │ v1.37.0 │ 24 Nov 25 10:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 10:53:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 10:53:27.003405 2025822 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:53:27.003573 2025822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:53:27.003579 2025822 out.go:374] Setting ErrFile to fd 2...
	I1124 10:53:27.003585 2025822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:53:27.003902 2025822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:53:27.004395 2025822 out.go:368] Setting JSON to false
	I1124 10:53:27.005454 2025822 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":34557,"bootTime":1763947050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 10:53:27.005559 2025822 start.go:143] virtualization:  
	I1124 10:53:27.009402 2025822 out.go:179] * [old-k8s-version-449797] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 10:53:27.014028 2025822 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 10:53:27.014226 2025822 notify.go:221] Checking for updates...
	I1124 10:53:27.021156 2025822 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 10:53:27.024532 2025822 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:53:27.027799 2025822 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 10:53:27.030844 2025822 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 10:53:27.033952 2025822 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 10:53:27.037700 2025822 config.go:182] Loaded profile config "kubernetes-upgrade-306449": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 10:53:27.037823 2025822 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 10:53:27.070391 2025822 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 10:53:27.070518 2025822 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:53:27.125769 2025822 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:53:27.116907632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:53:27.125882 2025822 docker.go:319] overlay module found
	I1124 10:53:27.129234 2025822 out.go:179] * Using the docker driver based on user configuration
	I1124 10:53:27.132208 2025822 start.go:309] selected driver: docker
	I1124 10:53:27.132228 2025822 start.go:927] validating driver "docker" against <nil>
	I1124 10:53:27.132241 2025822 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 10:53:27.132955 2025822 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:53:27.197419 2025822 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:53:27.186973712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:53:27.197577 2025822 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 10:53:27.197814 2025822 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 10:53:27.200915 2025822 out.go:179] * Using Docker driver with root privileges
	I1124 10:53:27.203922 2025822 cni.go:84] Creating CNI manager for ""
	I1124 10:53:27.204001 2025822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:53:27.204017 2025822 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 10:53:27.204097 2025822 start.go:353] cluster config:
	{Name:old-k8s-version-449797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-449797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:53:27.209325 2025822 out.go:179] * Starting "old-k8s-version-449797" primary control-plane node in "old-k8s-version-449797" cluster
	I1124 10:53:27.212408 2025822 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 10:53:27.215443 2025822 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 10:53:27.218323 2025822 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 10:53:27.218387 2025822 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 10:53:27.218408 2025822 cache.go:65] Caching tarball of preloaded images
	I1124 10:53:27.218419 2025822 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 10:53:27.218500 2025822 preload.go:238] Found /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 10:53:27.218511 2025822 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1124 10:53:27.218624 2025822 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/config.json ...
	I1124 10:53:27.218652 2025822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/config.json: {Name:mk6755b0f9a44e5b6a2d543e87c1ecaf089f26a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:53:27.243006 2025822 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 10:53:27.243030 2025822 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 10:53:27.243052 2025822 cache.go:243] Successfully downloaded all kic artifacts
	I1124 10:53:27.243090 2025822 start.go:360] acquireMachinesLock for old-k8s-version-449797: {Name:mkf2174adec1d17940fc428c7ffaac1e9313731a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:53:27.243198 2025822 start.go:364] duration metric: took 87.608µs to acquireMachinesLock for "old-k8s-version-449797"
	I1124 10:53:27.243230 2025822 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-449797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-449797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 10:53:27.243306 2025822 start.go:125] createHost starting for "" (driver="docker")
	I1124 10:53:27.246710 2025822 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 10:53:27.246973 2025822 start.go:159] libmachine.API.Create for "old-k8s-version-449797" (driver="docker")
	I1124 10:53:27.247010 2025822 client.go:173] LocalClient.Create starting
	I1124 10:53:27.247094 2025822 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem
	I1124 10:53:27.247141 2025822 main.go:143] libmachine: Decoding PEM data...
	I1124 10:53:27.247171 2025822 main.go:143] libmachine: Parsing certificate...
	I1124 10:53:27.247225 2025822 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem
	I1124 10:53:27.247249 2025822 main.go:143] libmachine: Decoding PEM data...
	I1124 10:53:27.247264 2025822 main.go:143] libmachine: Parsing certificate...
	I1124 10:53:27.247696 2025822 cli_runner.go:164] Run: docker network inspect old-k8s-version-449797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 10:53:27.263494 2025822 cli_runner.go:211] docker network inspect old-k8s-version-449797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 10:53:27.263588 2025822 network_create.go:284] running [docker network inspect old-k8s-version-449797] to gather additional debugging logs...
	I1124 10:53:27.263605 2025822 cli_runner.go:164] Run: docker network inspect old-k8s-version-449797
	W1124 10:53:27.278742 2025822 cli_runner.go:211] docker network inspect old-k8s-version-449797 returned with exit code 1
	I1124 10:53:27.278771 2025822 network_create.go:287] error running [docker network inspect old-k8s-version-449797]: docker network inspect old-k8s-version-449797: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-449797 not found
	I1124 10:53:27.278786 2025822 network_create.go:289] output of [docker network inspect old-k8s-version-449797]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-449797 not found
	
	** /stderr **
	I1124 10:53:27.278895 2025822 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 10:53:27.294871 2025822 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b39f8e694b2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:c3:8d:8c:34:1f} reservation:<nil>}
	I1124 10:53:27.295195 2025822 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2317d09e3adf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:b2:bc:c5:5c:19} reservation:<nil>}
	I1124 10:53:27.295502 2025822 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d7ec595b4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:b2:e2:3d:5c:63} reservation:<nil>}
	I1124 10:53:27.295951 2025822 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019eaa70}
	I1124 10:53:27.295973 2025822 network_create.go:124] attempt to create docker network old-k8s-version-449797 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 10:53:27.296028 2025822 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-449797 old-k8s-version-449797
	I1124 10:53:27.362459 2025822 network_create.go:108] docker network old-k8s-version-449797 192.168.76.0/24 created
	I1124 10:53:27.362495 2025822 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-449797" container
	I1124 10:53:27.362568 2025822 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 10:53:27.377855 2025822 cli_runner.go:164] Run: docker volume create old-k8s-version-449797 --label name.minikube.sigs.k8s.io=old-k8s-version-449797 --label created_by.minikube.sigs.k8s.io=true
	I1124 10:53:27.396234 2025822 oci.go:103] Successfully created a docker volume old-k8s-version-449797
	I1124 10:53:27.396319 2025822 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-449797-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-449797 --entrypoint /usr/bin/test -v old-k8s-version-449797:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 10:53:27.971581 2025822 oci.go:107] Successfully prepared a docker volume old-k8s-version-449797
	I1124 10:53:27.971647 2025822 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 10:53:27.971657 2025822 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 10:53:27.971732 2025822 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-449797:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 10:53:33.041378 2025822 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-449797:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.069592434s)
	I1124 10:53:33.041412 2025822 kic.go:203] duration metric: took 5.069751509s to extract preloaded images to volume ...
	W1124 10:53:33.041609 2025822 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 10:53:33.041770 2025822 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 10:53:33.092108 2025822 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-449797 --name old-k8s-version-449797 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-449797 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-449797 --network old-k8s-version-449797 --ip 192.168.76.2 --volume old-k8s-version-449797:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 10:53:33.363491 2025822 cli_runner.go:164] Run: docker container inspect old-k8s-version-449797 --format={{.State.Running}}
	I1124 10:53:33.385361 2025822 cli_runner.go:164] Run: docker container inspect old-k8s-version-449797 --format={{.State.Status}}
	I1124 10:53:33.408687 2025822 cli_runner.go:164] Run: docker exec old-k8s-version-449797 stat /var/lib/dpkg/alternatives/iptables
	I1124 10:53:33.458820 2025822 oci.go:144] the created container "old-k8s-version-449797" has a running status.
	I1124 10:53:33.458847 2025822 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa...
	I1124 10:53:33.550088 2025822 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 10:53:33.577446 2025822 cli_runner.go:164] Run: docker container inspect old-k8s-version-449797 --format={{.State.Status}}
	I1124 10:53:33.608546 2025822 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 10:53:33.608565 2025822 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-449797 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 10:53:33.670190 2025822 cli_runner.go:164] Run: docker container inspect old-k8s-version-449797 --format={{.State.Status}}
	I1124 10:53:33.692620 2025822 machine.go:94] provisionDockerMachine start ...
	I1124 10:53:33.692896 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:33.716610 2025822 main.go:143] libmachine: Using SSH client type: native
	I1124 10:53:33.716942 2025822 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35275 <nil> <nil>}
	I1124 10:53:33.716952 2025822 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 10:53:33.717671 2025822 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 10:53:36.868701 2025822 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-449797
	
	I1124 10:53:36.868727 2025822 ubuntu.go:182] provisioning hostname "old-k8s-version-449797"
	I1124 10:53:36.868798 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:36.887052 2025822 main.go:143] libmachine: Using SSH client type: native
	I1124 10:53:36.887383 2025822 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35275 <nil> <nil>}
	I1124 10:53:36.887402 2025822 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-449797 && echo "old-k8s-version-449797" | sudo tee /etc/hostname
	I1124 10:53:37.047721 2025822 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-449797
	
	I1124 10:53:37.047802 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:37.065549 2025822 main.go:143] libmachine: Using SSH client type: native
	I1124 10:53:37.065868 2025822 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35275 <nil> <nil>}
	I1124 10:53:37.065891 2025822 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-449797' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-449797/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-449797' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 10:53:37.221903 2025822 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 10:53:37.221981 2025822 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 10:53:37.222053 2025822 ubuntu.go:190] setting up certificates
	I1124 10:53:37.222082 2025822 provision.go:84] configureAuth start
	I1124 10:53:37.222196 2025822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-449797
	I1124 10:53:37.246644 2025822 provision.go:143] copyHostCerts
	I1124 10:53:37.246725 2025822 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 10:53:37.246734 2025822 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 10:53:37.246821 2025822 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 10:53:37.246934 2025822 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 10:53:37.246941 2025822 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 10:53:37.246968 2025822 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 10:53:37.247034 2025822 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 10:53:37.247043 2025822 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 10:53:37.247077 2025822 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 10:53:37.247134 2025822 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-449797 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-449797]
	I1124 10:53:37.486764 2025822 provision.go:177] copyRemoteCerts
	I1124 10:53:37.486835 2025822 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 10:53:37.486884 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:37.504982 2025822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35275 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa Username:docker}
	I1124 10:53:37.614060 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 10:53:37.633564 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 10:53:37.652239 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 10:53:37.670367 2025822 provision.go:87] duration metric: took 448.249263ms to configureAuth
	I1124 10:53:37.670394 2025822 ubuntu.go:206] setting minikube options for container-runtime
	I1124 10:53:37.670599 2025822 config.go:182] Loaded profile config "old-k8s-version-449797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 10:53:37.670706 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:37.689210 2025822 main.go:143] libmachine: Using SSH client type: native
	I1124 10:53:37.689569 2025822 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35275 <nil> <nil>}
	I1124 10:53:37.689591 2025822 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 10:53:37.981248 2025822 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 10:53:37.981275 2025822 machine.go:97] duration metric: took 4.288634921s to provisionDockerMachine
	I1124 10:53:37.981285 2025822 client.go:176] duration metric: took 10.734266329s to LocalClient.Create
	I1124 10:53:37.981329 2025822 start.go:167] duration metric: took 10.734356512s to libmachine.API.Create "old-k8s-version-449797"
	I1124 10:53:37.981343 2025822 start.go:293] postStartSetup for "old-k8s-version-449797" (driver="docker")
	I1124 10:53:37.981353 2025822 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 10:53:37.981437 2025822 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 10:53:37.981498 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:38.006204 2025822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35275 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa Username:docker}
	I1124 10:53:38.114007 2025822 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 10:53:38.118012 2025822 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 10:53:38.118104 2025822 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 10:53:38.118131 2025822 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 10:53:38.118225 2025822 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 10:53:38.118330 2025822 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 10:53:38.118443 2025822 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 10:53:38.126376 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 10:53:38.144537 2025822 start.go:296] duration metric: took 163.179666ms for postStartSetup
	I1124 10:53:38.144906 2025822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-449797
	I1124 10:53:38.162662 2025822 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/config.json ...
	I1124 10:53:38.162966 2025822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:53:38.163018 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:38.180146 2025822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35275 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa Username:docker}
	I1124 10:53:38.282084 2025822 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 10:53:38.286723 2025822 start.go:128] duration metric: took 11.043402692s to createHost
	I1124 10:53:38.286745 2025822 start.go:83] releasing machines lock for "old-k8s-version-449797", held for 11.043531883s
	I1124 10:53:38.286823 2025822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-449797
	I1124 10:53:38.303254 2025822 ssh_runner.go:195] Run: cat /version.json
	I1124 10:53:38.303376 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:38.303665 2025822 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 10:53:38.303726 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:53:38.325330 2025822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35275 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa Username:docker}
	I1124 10:53:38.326023 2025822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35275 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa Username:docker}
	I1124 10:53:38.518232 2025822 ssh_runner.go:195] Run: systemctl --version
	I1124 10:53:38.525160 2025822 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 10:53:38.560825 2025822 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 10:53:38.565495 2025822 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 10:53:38.565638 2025822 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 10:53:38.594095 2025822 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 10:53:38.594172 2025822 start.go:496] detecting cgroup driver to use...
	I1124 10:53:38.594241 2025822 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 10:53:38.594319 2025822 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 10:53:38.612647 2025822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 10:53:38.626019 2025822 docker.go:218] disabling cri-docker service (if available) ...
	I1124 10:53:38.626085 2025822 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 10:53:38.644449 2025822 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 10:53:38.664048 2025822 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 10:53:38.813909 2025822 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 10:53:38.935872 2025822 docker.go:234] disabling docker service ...
	I1124 10:53:38.935959 2025822 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 10:53:38.957396 2025822 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 10:53:38.970710 2025822 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 10:53:39.095419 2025822 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 10:53:39.209404 2025822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 10:53:39.223888 2025822 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 10:53:39.239345 2025822 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 10:53:39.239444 2025822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:53:39.248617 2025822 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 10:53:39.248754 2025822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:53:39.258011 2025822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:53:39.267610 2025822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:53:39.277038 2025822 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 10:53:39.285813 2025822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:53:39.294940 2025822 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:53:39.308466 2025822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:53:39.317890 2025822 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 10:53:39.326268 2025822 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 10:53:39.333981 2025822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:53:39.450659 2025822 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 10:53:39.635813 2025822 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 10:53:39.635917 2025822 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 10:53:39.639618 2025822 start.go:564] Will wait 60s for crictl version
	I1124 10:53:39.639706 2025822 ssh_runner.go:195] Run: which crictl
	I1124 10:53:39.643224 2025822 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 10:53:39.672940 2025822 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 10:53:39.673053 2025822 ssh_runner.go:195] Run: crio --version
	I1124 10:53:39.700752 2025822 ssh_runner.go:195] Run: crio --version
	I1124 10:53:39.735024 2025822 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 10:53:39.737960 2025822 cli_runner.go:164] Run: docker network inspect old-k8s-version-449797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 10:53:39.754254 2025822 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 10:53:39.758580 2025822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 10:53:39.768593 2025822 kubeadm.go:884] updating cluster {Name:old-k8s-version-449797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-449797 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 10:53:39.768712 2025822 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 10:53:39.768763 2025822 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 10:53:39.806476 2025822 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 10:53:39.806500 2025822 crio.go:433] Images already preloaded, skipping extraction
	I1124 10:53:39.806557 2025822 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 10:53:39.832196 2025822 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 10:53:39.832223 2025822 cache_images.go:86] Images are preloaded, skipping loading
	I1124 10:53:39.832231 2025822 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1124 10:53:39.832369 2025822 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-449797 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-449797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 10:53:39.832481 2025822 ssh_runner.go:195] Run: crio config
	I1124 10:53:39.900542 2025822 cni.go:84] Creating CNI manager for ""
	I1124 10:53:39.900569 2025822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:53:39.900589 2025822 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 10:53:39.900613 2025822 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-449797 NodeName:old-k8s-version-449797 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 10:53:39.900757 2025822 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-449797"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 10:53:39.900835 2025822 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 10:53:39.908892 2025822 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 10:53:39.908976 2025822 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 10:53:39.916886 2025822 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 10:53:39.930044 2025822 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 10:53:39.946260 2025822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1124 10:53:39.959505 2025822 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 10:53:39.963355 2025822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 10:53:39.973321 2025822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:53:40.095388 2025822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 10:53:40.113118 2025822 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797 for IP: 192.168.76.2
	I1124 10:53:40.113139 2025822 certs.go:195] generating shared ca certs ...
	I1124 10:53:40.113157 2025822 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:53:40.113387 2025822 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 10:53:40.113437 2025822 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 10:53:40.113445 2025822 certs.go:257] generating profile certs ...
	I1124 10:53:40.113519 2025822 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/client.key
	I1124 10:53:40.113532 2025822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/client.crt with IP's: []
	I1124 10:53:40.239303 2025822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/client.crt ...
	I1124 10:53:40.239335 2025822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/client.crt: {Name:mkbd6e8e4f3b6f7ee846be841ab19d3c6fc932a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:53:40.239586 2025822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/client.key ...
	I1124 10:53:40.239605 2025822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/client.key: {Name:mk1aa5b5f02af1fe78aeb3e7b605c729ac352fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:53:40.239771 2025822 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.key.ca883590
	I1124 10:53:40.239795 2025822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.crt.ca883590 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 10:53:40.537965 2025822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.crt.ca883590 ...
	I1124 10:53:40.537998 2025822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.crt.ca883590: {Name:mk8f1b691632b5230a9c62d58070d13877791031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:53:40.538217 2025822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.key.ca883590 ...
	I1124 10:53:40.538236 2025822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.key.ca883590: {Name:mk9e639049f4833557fd23250d51b1113be819e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:53:40.538331 2025822 certs.go:382] copying /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.crt.ca883590 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.crt
	I1124 10:53:40.538411 2025822 certs.go:386] copying /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.key.ca883590 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.key
	I1124 10:53:40.538473 2025822 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/proxy-client.key
	I1124 10:53:40.538493 2025822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/proxy-client.crt with IP's: []
	I1124 10:53:40.940799 2025822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/proxy-client.crt ...
	I1124 10:53:40.940833 2025822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/proxy-client.crt: {Name:mk7c66d931e640e7ae1cbc0062bebf0d04781886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:53:40.941029 2025822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/proxy-client.key ...
	I1124 10:53:40.941045 2025822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/proxy-client.key: {Name:mkf8fe58410ef40bf2880936a8f7e96ce9c7282c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:53:40.941293 2025822 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 10:53:40.941344 2025822 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 10:53:40.941357 2025822 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 10:53:40.941387 2025822 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 10:53:40.941425 2025822 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 10:53:40.941457 2025822 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 10:53:40.941507 2025822 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 10:53:40.942079 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 10:53:40.975220 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 10:53:41.020703 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 10:53:41.044460 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 10:53:41.065014 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 10:53:41.083693 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 10:53:41.101053 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 10:53:41.118726 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/old-k8s-version-449797/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 10:53:41.136578 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 10:53:41.154877 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 10:53:41.172701 2025822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 10:53:41.189586 2025822 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 10:53:41.202295 2025822 ssh_runner.go:195] Run: openssl version
	I1124 10:53:41.208935 2025822 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 10:53:41.217238 2025822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 10:53:41.220963 2025822 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 10:53:41.221029 2025822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 10:53:41.262054 2025822 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 10:53:41.270388 2025822 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 10:53:41.278704 2025822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 10:53:41.282672 2025822 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 10:53:41.282737 2025822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 10:53:41.323583 2025822 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 10:53:41.331999 2025822 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 10:53:41.340810 2025822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:53:41.344516 2025822 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:53:41.344583 2025822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:53:41.388492 2025822 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 10:53:41.397011 2025822 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 10:53:41.400483 2025822 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 10:53:41.400570 2025822 kubeadm.go:401] StartCluster: {Name:old-k8s-version-449797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-449797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:53:41.400657 2025822 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 10:53:41.400721 2025822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 10:53:41.431609 2025822 cri.go:89] found id: ""
	I1124 10:53:41.431750 2025822 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 10:53:41.439899 2025822 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 10:53:41.448047 2025822 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 10:53:41.448116 2025822 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 10:53:41.456555 2025822 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 10:53:41.456578 2025822 kubeadm.go:158] found existing configuration files:
	
	I1124 10:53:41.456681 2025822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 10:53:41.464848 2025822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 10:53:41.464919 2025822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 10:53:41.472652 2025822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 10:53:41.480834 2025822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 10:53:41.480948 2025822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 10:53:41.488536 2025822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 10:53:41.497394 2025822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 10:53:41.497488 2025822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 10:53:41.505060 2025822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 10:53:41.513018 2025822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 10:53:41.513135 2025822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 10:53:41.521863 2025822 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 10:53:41.568566 2025822 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 10:53:41.568900 2025822 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 10:53:41.606616 2025822 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 10:53:41.606692 2025822 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 10:53:41.606731 2025822 kubeadm.go:319] OS: Linux
	I1124 10:53:41.606778 2025822 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 10:53:41.606827 2025822 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 10:53:41.606876 2025822 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 10:53:41.606925 2025822 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 10:53:41.606975 2025822 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 10:53:41.607028 2025822 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 10:53:41.607074 2025822 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 10:53:41.607123 2025822 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 10:53:41.607177 2025822 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 10:53:41.691992 2025822 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 10:53:41.692146 2025822 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 10:53:41.692275 2025822 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 10:53:41.895115 2025822 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 10:53:41.901939 2025822 out.go:252]   - Generating certificates and keys ...
	I1124 10:53:41.902095 2025822 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 10:53:41.902187 2025822 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 10:53:42.758577 2025822 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 10:53:44.210430 2025822 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 10:53:44.496914 2025822 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 10:53:44.956059 2025822 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 10:53:45.854334 2025822 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 10:53:45.854744 2025822 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-449797] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 10:53:46.018577 2025822 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 10:53:46.018995 2025822 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-449797] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 10:53:46.367652 2025822 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 10:53:47.181217 2025822 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 10:53:47.738671 2025822 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 10:53:47.738764 2025822 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 10:53:48.035038 2025822 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 10:53:48.446972 2025822 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 10:53:49.315076 2025822 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 10:53:49.791644 2025822 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 10:53:49.792383 2025822 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 10:53:49.795182 2025822 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 10:53:49.798842 2025822 out.go:252]   - Booting up control plane ...
	I1124 10:53:49.798972 2025822 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 10:53:49.799100 2025822 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 10:53:49.800248 2025822 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 10:53:49.817177 2025822 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 10:53:49.818496 2025822 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 10:53:49.818545 2025822 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 10:53:49.961703 2025822 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 10:53:57.965153 2025822 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.003537 seconds
	I1124 10:53:57.965345 2025822 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 10:53:57.979954 2025822 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 10:53:58.507469 2025822 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 10:53:58.507741 2025822 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-449797 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 10:53:59.030048 2025822 kubeadm.go:319] [bootstrap-token] Using token: rim0xb.762olv81rqwqsc7x
	I1124 10:53:59.033247 2025822 out.go:252]   - Configuring RBAC rules ...
	I1124 10:53:59.033390 2025822 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 10:53:59.041091 2025822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 10:53:59.049936 2025822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 10:53:59.054599 2025822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 10:53:59.060983 2025822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 10:53:59.065472 2025822 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 10:53:59.080706 2025822 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 10:53:59.353193 2025822 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 10:53:59.448926 2025822 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 10:53:59.449984 2025822 kubeadm.go:319] 
	I1124 10:53:59.450064 2025822 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 10:53:59.450069 2025822 kubeadm.go:319] 
	I1124 10:53:59.450147 2025822 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 10:53:59.450151 2025822 kubeadm.go:319] 
	I1124 10:53:59.450176 2025822 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 10:53:59.450235 2025822 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 10:53:59.450288 2025822 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 10:53:59.450293 2025822 kubeadm.go:319] 
	I1124 10:53:59.450346 2025822 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 10:53:59.450350 2025822 kubeadm.go:319] 
	I1124 10:53:59.450397 2025822 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 10:53:59.450401 2025822 kubeadm.go:319] 
	I1124 10:53:59.450453 2025822 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 10:53:59.450528 2025822 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 10:53:59.450597 2025822 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 10:53:59.450601 2025822 kubeadm.go:319] 
	I1124 10:53:59.450689 2025822 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 10:53:59.450765 2025822 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 10:53:59.450769 2025822 kubeadm.go:319] 
	I1124 10:53:59.450852 2025822 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rim0xb.762olv81rqwqsc7x \
	I1124 10:53:59.450956 2025822 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5d16c010d48f473ef9a89b08092f440407a6e7096b121b775134bbe2ddebd722 \
	I1124 10:53:59.450976 2025822 kubeadm.go:319] 	--control-plane 
	I1124 10:53:59.450980 2025822 kubeadm.go:319] 
	I1124 10:53:59.451072 2025822 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 10:53:59.451077 2025822 kubeadm.go:319] 
	I1124 10:53:59.451159 2025822 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rim0xb.762olv81rqwqsc7x \
	I1124 10:53:59.451263 2025822 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5d16c010d48f473ef9a89b08092f440407a6e7096b121b775134bbe2ddebd722 
	I1124 10:53:59.455192 2025822 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:53:59.455305 2025822 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:53:59.455321 2025822 cni.go:84] Creating CNI manager for ""
	I1124 10:53:59.455328 2025822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:53:59.458630 2025822 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 10:53:59.461557 2025822 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 10:53:59.467050 2025822 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 10:53:59.467070 2025822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 10:53:59.482124 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 10:54:00.746106 2025822 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.263945198s)
	I1124 10:54:00.746151 2025822 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 10:54:00.746267 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:00.746359 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-449797 minikube.k8s.io/updated_at=2025_11_24T10_54_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=old-k8s-version-449797 minikube.k8s.io/primary=true
	I1124 10:54:00.894101 2025822 ops.go:34] apiserver oom_adj: -16
	I1124 10:54:00.894229 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:01.395036 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:01.895055 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:02.395102 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:02.894593 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:03.394784 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:03.895291 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:04.395227 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:04.894989 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:05.395280 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:05.894326 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:06.395231 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:06.894365 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:07.394603 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:07.895121 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:08.395178 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:08.895111 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:09.394437 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:09.894322 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:10.395138 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:10.894292 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:11.394413 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:11.894403 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:12.394530 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:12.894859 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:13.395133 2025822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 10:54:13.502795 2025822 kubeadm.go:1114] duration metric: took 12.756574347s to wait for elevateKubeSystemPrivileges
	I1124 10:54:13.502823 2025822 kubeadm.go:403] duration metric: took 32.102258248s to StartCluster
	I1124 10:54:13.502842 2025822 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:54:13.502903 2025822 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:54:13.503767 2025822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:54:13.504632 2025822 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 10:54:13.504742 2025822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 10:54:13.505010 2025822 config.go:182] Loaded profile config "old-k8s-version-449797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 10:54:13.505056 2025822 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 10:54:13.505155 2025822 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-449797"
	I1124 10:54:13.505181 2025822 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-449797"
	I1124 10:54:13.505216 2025822 host.go:66] Checking if "old-k8s-version-449797" exists ...
	I1124 10:54:13.505868 2025822 cli_runner.go:164] Run: docker container inspect old-k8s-version-449797 --format={{.State.Status}}
	I1124 10:54:13.505867 2025822 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-449797"
	I1124 10:54:13.505959 2025822 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-449797"
	I1124 10:54:13.506272 2025822 cli_runner.go:164] Run: docker container inspect old-k8s-version-449797 --format={{.State.Status}}
	I1124 10:54:13.508730 2025822 out.go:179] * Verifying Kubernetes components...
	I1124 10:54:13.511837 2025822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:54:13.553208 2025822 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 10:54:13.557843 2025822 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-449797"
	I1124 10:54:13.557899 2025822 host.go:66] Checking if "old-k8s-version-449797" exists ...
	I1124 10:54:13.558351 2025822 cli_runner.go:164] Run: docker container inspect old-k8s-version-449797 --format={{.State.Status}}
	I1124 10:54:13.558611 2025822 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 10:54:13.558636 2025822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 10:54:13.558690 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:54:13.590978 2025822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35275 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa Username:docker}
	I1124 10:54:13.597251 2025822 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 10:54:13.597280 2025822 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 10:54:13.597346 2025822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-449797
	I1124 10:54:13.628821 2025822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35275 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/old-k8s-version-449797/id_rsa Username:docker}
	I1124 10:54:13.778070 2025822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 10:54:13.778197 2025822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 10:54:13.821329 2025822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 10:54:13.825308 2025822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 10:54:14.681006 2025822 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-449797" to be "Ready" ...
	I1124 10:54:14.681364 2025822 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 10:54:15.170575 2025822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.345167615s)
	I1124 10:54:15.170962 2025822 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.34955371s)
	I1124 10:54:15.183479 2025822 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 10:54:15.186380 2025822 addons.go:530] duration metric: took 1.681320265s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 10:54:15.187222 2025822 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-449797" context rescaled to 1 replicas
	W1124 10:54:16.684735 2025822 node_ready.go:57] node "old-k8s-version-449797" has "Ready":"False" status (will retry)
	W1124 10:54:19.184002 2025822 node_ready.go:57] node "old-k8s-version-449797" has "Ready":"False" status (will retry)
	W1124 10:54:21.684738 2025822 node_ready.go:57] node "old-k8s-version-449797" has "Ready":"False" status (will retry)
	I1124 10:54:26.767390 1986432 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001032204s
	I1124 10:54:26.767428 1986432 kubeadm.go:319] 
	I1124 10:54:26.767483 1986432 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1124 10:54:26.767521 1986432 kubeadm.go:319] 	- The kubelet is not running
	I1124 10:54:26.767623 1986432 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1124 10:54:26.767632 1986432 kubeadm.go:319] 
	I1124 10:54:26.767731 1986432 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1124 10:54:26.767765 1986432 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1124 10:54:26.767798 1986432 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1124 10:54:26.767805 1986432 kubeadm.go:319] 
	I1124 10:54:26.771960 1986432 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 10:54:26.772486 1986432 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1124 10:54:26.772609 1986432 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 10:54:26.772893 1986432 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1124 10:54:26.772931 1986432 kubeadm.go:319] 
	I1124 10:54:26.773067 1986432 kubeadm.go:403] duration metric: took 12m13.436885573s to StartCluster
	I1124 10:54:26.773197 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:54:26.773210 1986432 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1124 10:54:26.773259 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:54:26.802426 1986432 cri.go:89] found id: ""
	I1124 10:54:26.802452 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.802461 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:54:26.802469 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:54:26.802542 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:54:26.833419 1986432 cri.go:89] found id: ""
	I1124 10:54:26.833447 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.833457 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:54:26.833464 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:54:26.833527 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:54:26.861824 1986432 cri.go:89] found id: ""
	I1124 10:54:26.861854 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.861864 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:54:26.861871 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:54:26.861936 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1124 10:54:24.184031 2025822 node_ready.go:57] node "old-k8s-version-449797" has "Ready":"False" status (will retry)
	W1124 10:54:26.684068 2025822 node_ready.go:57] node "old-k8s-version-449797" has "Ready":"False" status (will retry)
	I1124 10:54:26.893971 1986432 cri.go:89] found id: ""
	I1124 10:54:26.893996 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.894006 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:54:26.894013 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:54:26.894089 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:54:26.922120 1986432 cri.go:89] found id: ""
	I1124 10:54:26.922145 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.922155 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:54:26.922161 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:54:26.922218 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:54:26.949491 1986432 cri.go:89] found id: ""
	I1124 10:54:26.949514 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.949523 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:54:26.949530 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:54:26.949590 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:54:26.975692 1986432 cri.go:89] found id: ""
	I1124 10:54:26.975718 1986432 logs.go:282] 0 containers: []
	W1124 10:54:26.975727 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:54:26.975733 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:54:26.975792 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:54:27.005263 1986432 cri.go:89] found id: ""
	I1124 10:54:27.005289 1986432 logs.go:282] 0 containers: []
	W1124 10:54:27.005299 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:54:27.005309 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:54:27.005322 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:54:27.080411 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:54:27.080449 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:54:27.098348 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:54:27.098377 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:54:27.168103 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:54:27.168127 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:54:27.168141 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:54:27.215650 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:54:27.215685 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:54:27.247499 1986432 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001032204s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1124 10:54:27.247551 1986432 out.go:285] * 
	W1124 10:54:27.247629 1986432 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001032204s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:54:27.247882 1986432 out.go:285] * 
	W1124 10:54:27.250129 1986432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 10:54:27.255665 1986432 out.go:203] 
	W1124 10:54:27.258526 1986432 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001032204s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1124 10:54:27.258575 1986432 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1124 10:54:27.258596 1986432 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1124 10:54:27.261630 1986432 out.go:203] 
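
The repeated kubelet validation failure above ("kubelet is configured to not run on a host using cgroup v1") hinges on which cgroup hierarchy the host mounts. A minimal check sketch, assuming a Linux host with GNU coreutils as on the Ubuntu runner in this log ("cgroup2fs" indicates cgroup v2, "tmpfs" indicates the v1 hierarchy):

```shell
# Report the filesystem type mounted at the standard cgroup mount point.
# This is a diagnostic sketch, not part of the test suite above.
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo unknown)
case "$fstype" in
  cgroup2fs) version="v2" ;;       # unified hierarchy: kubelet v1.35+ starts normally
  tmpfs)     version="v1" ;;       # legacy hierarchy: triggers the validation error above
  *)         version="unknown" ;;
esac
echo "host cgroup: $version"
```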
	
	
	==> CRI-O <==
	Nov 24 10:41:56 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:41:56.45682162Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=dd270918-2289-49d0-98d8-779b58f35493 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:41:56 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:41:56.456893974Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=dd270918-2289-49d0-98d8-779b58f35493 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:41:56 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:41:56.552857115Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.5.24-0" id=0ee6845e-343d-4842-a571-96c5655bc577 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:41:56 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:41:56.553143258Z" level=info msg="Image registry.k8s.io/etcd:3.5.24-0 not found" id=0ee6845e-343d-4842-a571-96c5655bc577 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:41:56 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:41:56.553202484Z" level=info msg="Neither image nor artifact registry.k8s.io/etcd:3.5.24-0 found" id=0ee6845e-343d-4842-a571-96c5655bc577 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:41:57 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:41:57.054106752Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=102880e4-e13f-4aa7-81db-7fe4f7440488 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.817729805Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=abe054f2-de30-44c3-8b80-36fe61c26614 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.82195197Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=23bc8cd2-cb55-4017-b213-b1aaa10949d6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.823563909Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=527a6297-ea4b-4245-be2a-62ebd7824baf name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.826058752Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=25636d57-fa12-46ae-b828-99eefbe1b5f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.826975222Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56ab32ac-245d-4e1a-992d-a92ef0584487 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.828215127Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a926c80e-1f6f-48cf-8f48-0f7e9b9bd472 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.830127117Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1cc2a25a-b592-49e8-845f-587d0e8b8f02 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.830231816Z" level=info msg="Image registry.k8s.io/etcd:3.6.5-0 not found" id=1cc2a25a-b592-49e8-845f-587d0e8b8f02 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.830283444Z" level=info msg="Neither image nor artifact registry.k8s.io/etcd:3.6.5-0 found" id=1cc2a25a-b592-49e8-845f-587d0e8b8f02 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.830720653Z" level=info msg="Pulling image: registry.k8s.io/etcd:3.6.5-0" id=77241cb1-63af-447e-b1df-f2019d2a2e71 name=/runtime.v1.ImageService/PullImage
	Nov 24 10:46:18 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:18.832734905Z" level=info msg="Trying to access \"registry.k8s.io/etcd:3.6.5-0\""
	Nov 24 10:46:21 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:46:21.906588687Z" level=info msg="Pulled image: registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e" id=77241cb1-63af-447e-b1df-f2019d2a2e71 name=/runtime.v1.ImageService/PullImage
	Nov 24 10:50:24 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:50:24.493937326Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f3963643-070c-4eb2-bcc9-985fbe308b7a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:50:24 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:50:24.495546384Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=97c2a02a-fd57-40c0-8ca4-c7bc1b4ad8a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:50:24 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:50:24.497022221Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=62abba30-909b-4044-9873-b302b934327f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:50:24 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:50:24.498585058Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d5ed4ffd-28e6-4412-aca7-51c9d415fa9f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:50:24 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:50:24.50021106Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1b033491-0e2a-496c-ac78-f25fc8b79225 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:50:24 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:50:24.501764108Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=982deb84-09c8-4c36-a89b-0e55bede8868 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 10:50:24 kubernetes-upgrade-306449 crio[614]: time="2025-11-24T10:50:24.502640642Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=621f09a7-6d52-4db3-be9c-fa6960ad20f1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.367824] overlayfs: idmapped layers are currently not supported
	[Nov24 10:21] overlayfs: idmapped layers are currently not supported
	[Nov24 10:26] overlayfs: idmapped layers are currently not supported
	[ +33.890897] overlayfs: idmapped layers are currently not supported
	[Nov24 10:28] overlayfs: idmapped layers are currently not supported
	[Nov24 10:29] overlayfs: idmapped layers are currently not supported
	[Nov24 10:30] overlayfs: idmapped layers are currently not supported
	[Nov24 10:32] overlayfs: idmapped layers are currently not supported
	[ +26.643756] overlayfs: idmapped layers are currently not supported
	[  +9.285653] overlayfs: idmapped layers are currently not supported
	[Nov24 10:33] overlayfs: idmapped layers are currently not supported
	[ +18.325038] overlayfs: idmapped layers are currently not supported
	[Nov24 10:34] overlayfs: idmapped layers are currently not supported
	[Nov24 10:35] overlayfs: idmapped layers are currently not supported
	[Nov24 10:36] overlayfs: idmapped layers are currently not supported
	[Nov24 10:37] overlayfs: idmapped layers are currently not supported
	[Nov24 10:39] overlayfs: idmapped layers are currently not supported
	[Nov24 10:41] overlayfs: idmapped layers are currently not supported
	[ +25.006505] overlayfs: idmapped layers are currently not supported
	[Nov24 10:44] overlayfs: idmapped layers are currently not supported
	[Nov24 10:46] overlayfs: idmapped layers are currently not supported
	[Nov24 10:47] overlayfs: idmapped layers are currently not supported
	[ +37.239918] overlayfs: idmapped layers are currently not supported
	[Nov24 10:53] overlayfs: idmapped layers are currently not supported
	[ +36.583885] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:54:29 up  9:36,  0 user,  load average: 3.31, 2.25, 2.11
	Linux kubernetes-upgrade-306449 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 24 10:54:26 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:54:27 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Nov 24 10:54:27 kubernetes-upgrade-306449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:54:27 kubernetes-upgrade-306449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:54:27 kubernetes-upgrade-306449 kubelet[12905]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:54:27 kubernetes-upgrade-306449 kubelet[12905]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:54:27 kubernetes-upgrade-306449 kubelet[12905]: E1124 10:54:27.535681   12905 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:54:27 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:54:27 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:54:28 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Nov 24 10:54:28 kubernetes-upgrade-306449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:54:28 kubernetes-upgrade-306449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:54:28 kubernetes-upgrade-306449 kubelet[12911]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:54:28 kubernetes-upgrade-306449 kubelet[12911]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:54:28 kubernetes-upgrade-306449 kubelet[12911]: E1124 10:54:28.284254   12911 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:54:28 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:54:28 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 24 10:54:28 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Nov 24 10:54:28 kubernetes-upgrade-306449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:54:28 kubernetes-upgrade-306449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 10:54:29 kubernetes-upgrade-306449 kubelet[12975]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:54:29 kubernetes-upgrade-306449 kubelet[12975]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Nov 24 10:54:29 kubernetes-upgrade-306449 kubelet[12975]: E1124 10:54:29.020193   12975 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Nov 24 10:54:29 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 24 10:54:29 kubernetes-upgrade-306449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
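
Per the kubeadm warning in the captured output, kubelet v1.35+ requires an explicit opt-in to keep running on a cgroup v1 host. A hedged sketch of that opt-in as a KubeletConfiguration fragment (assumption: the lowercase v1beta1 field spelling `failCgroupV1` corresponding to the `FailCgroupV1` option the warning names; the warning also notes the SystemVerification check must be skipped separately):

```yaml
# Hypothetical minimal fragment; field name taken from the kubeadm warning
# ("set the kubelet configuration option 'FailCgroupV1' to 'false'").
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```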
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-306449 -n kubernetes-upgrade-306449
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-306449 -n kubernetes-upgrade-306449: exit status 2 (354.953826ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-306449" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-306449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-306449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-306449: (2.565003714s)
--- FAIL: TestKubernetesUpgrade (802.35s)
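
The failure output also suggests retrying with `--extra-config=kubelet.cgroup-driver=systemd`. A sketch of that retry, echoed for review rather than executed (the profile name is taken from this run; whether this flag resolves the cgroup v1 validation error specifically is not established by the log):

```shell
# Suggested retry from the minikube failure output above; printed only, so it
# can be reviewed before running on a machine that has the minikube binary.
cmd='minikube start -p kubernetes-upgrade-306449 --extra-config=kubelet.cgroup-driver=systemd'
echo "$cmd"
```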

                                                
                                    
TestPause/serial/Pause (7.44s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-245240 --alsologtostderr -v=5
E1124 10:45:53.142274 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-245240 --alsologtostderr -v=5: exit status 80 (2.562606411s)

                                                
                                                
-- stdout --
	* Pausing node pause-245240 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 10:45:51.741476 2007314 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:45:51.742296 2007314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:45:51.742329 2007314 out.go:374] Setting ErrFile to fd 2...
	I1124 10:45:51.742352 2007314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:45:51.742647 2007314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:45:51.742942 2007314 out.go:368] Setting JSON to false
	I1124 10:45:51.742989 2007314 mustload.go:66] Loading cluster: pause-245240
	I1124 10:45:51.743530 2007314 config.go:182] Loaded profile config "pause-245240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:45:51.744039 2007314 cli_runner.go:164] Run: docker container inspect pause-245240 --format={{.State.Status}}
	I1124 10:45:51.769878 2007314 host.go:66] Checking if "pause-245240" exists ...
	I1124 10:45:51.770278 2007314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:45:51.841573 2007314 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 10:45:51.831468812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:45:51.842301 2007314 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-245240 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 10:45:51.845482 2007314 out.go:179] * Pausing node pause-245240 ... 
	I1124 10:45:51.849251 2007314 host.go:66] Checking if "pause-245240" exists ...
	I1124 10:45:51.849600 2007314 ssh_runner.go:195] Run: systemctl --version
	I1124 10:45:51.849651 2007314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:51.868510 2007314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:51.971553 2007314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:45:51.983918 2007314 pause.go:52] kubelet running: true
	I1124 10:45:51.983996 2007314 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 10:45:52.200707 2007314 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 10:45:52.200805 2007314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 10:45:52.273554 2007314 cri.go:89] found id: "7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775"
	I1124 10:45:52.273580 2007314 cri.go:89] found id: "97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23"
	I1124 10:45:52.273586 2007314 cri.go:89] found id: "993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82"
	I1124 10:45:52.273591 2007314 cri.go:89] found id: "632f540de17ba0538f526e70308122e739fd97ce7682b24f147c31e556bc48c0"
	I1124 10:45:52.273595 2007314 cri.go:89] found id: "25758a2f97099b15dd6a39598a2978fc9d586fb8cd9399a75480b182727d437d"
	I1124 10:45:52.273598 2007314 cri.go:89] found id: "4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36"
	I1124 10:45:52.273601 2007314 cri.go:89] found id: "7d04b12f6f11a5a508fafd445c9fbafeb2d5fbb41c9206693db9d7b163d59c81"
	I1124 10:45:52.273604 2007314 cri.go:89] found id: "c3454e38c0091545c6a26ed711bda1d653d8308c28b941594fa43bc7a9ab2937"
	I1124 10:45:52.273607 2007314 cri.go:89] found id: "4828bd44aea438cda942b71c2f80e7f2d601bc85ff99b48cf48e14a03bfef35f"
	I1124 10:45:52.273622 2007314 cri.go:89] found id: "e86c30e4e2616da17c57bf36566f76243193a68aebf68df8f0a2e44a99680d1c"
	I1124 10:45:52.273628 2007314 cri.go:89] found id: "a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164"
	I1124 10:45:52.273632 2007314 cri.go:89] found id: "c5435f90e719aa2779bbe5f3b217b6402f384412c6e48e6340e0d29d24bbe98b"
	I1124 10:45:52.273635 2007314 cri.go:89] found id: "dcfbc31b64e74a5e46f6371b921ce733ba64c3b3efc2c060d173e151a9a78cd6"
	I1124 10:45:52.273638 2007314 cri.go:89] found id: "4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0"
	I1124 10:45:52.273641 2007314 cri.go:89] found id: ""
	I1124 10:45:52.273692 2007314 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 10:45:52.285054 2007314 retry.go:31] will retry after 305.825794ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T10:45:52Z" level=error msg="open /run/runc: no such file or directory"
	I1124 10:45:52.591659 2007314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:45:52.604658 2007314 pause.go:52] kubelet running: false
	I1124 10:45:52.604736 2007314 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 10:45:52.751693 2007314 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 10:45:52.751803 2007314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 10:45:52.822739 2007314 cri.go:89] found id: "7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775"
	I1124 10:45:52.822771 2007314 cri.go:89] found id: "97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23"
	I1124 10:45:52.822776 2007314 cri.go:89] found id: "993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82"
	I1124 10:45:52.822780 2007314 cri.go:89] found id: "632f540de17ba0538f526e70308122e739fd97ce7682b24f147c31e556bc48c0"
	I1124 10:45:52.822783 2007314 cri.go:89] found id: "25758a2f97099b15dd6a39598a2978fc9d586fb8cd9399a75480b182727d437d"
	I1124 10:45:52.822787 2007314 cri.go:89] found id: "4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36"
	I1124 10:45:52.822790 2007314 cri.go:89] found id: "7d04b12f6f11a5a508fafd445c9fbafeb2d5fbb41c9206693db9d7b163d59c81"
	I1124 10:45:52.822794 2007314 cri.go:89] found id: "c3454e38c0091545c6a26ed711bda1d653d8308c28b941594fa43bc7a9ab2937"
	I1124 10:45:52.822798 2007314 cri.go:89] found id: "4828bd44aea438cda942b71c2f80e7f2d601bc85ff99b48cf48e14a03bfef35f"
	I1124 10:45:52.822804 2007314 cri.go:89] found id: "e86c30e4e2616da17c57bf36566f76243193a68aebf68df8f0a2e44a99680d1c"
	I1124 10:45:52.822810 2007314 cri.go:89] found id: "a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164"
	I1124 10:45:52.822814 2007314 cri.go:89] found id: "c5435f90e719aa2779bbe5f3b217b6402f384412c6e48e6340e0d29d24bbe98b"
	I1124 10:45:52.822817 2007314 cri.go:89] found id: "dcfbc31b64e74a5e46f6371b921ce733ba64c3b3efc2c060d173e151a9a78cd6"
	I1124 10:45:52.822820 2007314 cri.go:89] found id: "4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0"
	I1124 10:45:52.822824 2007314 cri.go:89] found id: ""
	I1124 10:45:52.822874 2007314 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 10:45:52.833837 2007314 retry.go:31] will retry after 492.265091ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T10:45:52Z" level=error msg="open /run/runc: no such file or directory"
	I1124 10:45:53.326339 2007314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:45:53.340871 2007314 pause.go:52] kubelet running: false
	I1124 10:45:53.340951 2007314 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 10:45:53.552477 2007314 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 10:45:53.552662 2007314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 10:45:53.647201 2007314 cri.go:89] found id: "7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775"
	I1124 10:45:53.647232 2007314 cri.go:89] found id: "97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23"
	I1124 10:45:53.647238 2007314 cri.go:89] found id: "993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82"
	I1124 10:45:53.647241 2007314 cri.go:89] found id: "632f540de17ba0538f526e70308122e739fd97ce7682b24f147c31e556bc48c0"
	I1124 10:45:53.647245 2007314 cri.go:89] found id: "25758a2f97099b15dd6a39598a2978fc9d586fb8cd9399a75480b182727d437d"
	I1124 10:45:53.647248 2007314 cri.go:89] found id: "4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36"
	I1124 10:45:53.647251 2007314 cri.go:89] found id: "7d04b12f6f11a5a508fafd445c9fbafeb2d5fbb41c9206693db9d7b163d59c81"
	I1124 10:45:53.647254 2007314 cri.go:89] found id: "c3454e38c0091545c6a26ed711bda1d653d8308c28b941594fa43bc7a9ab2937"
	I1124 10:45:53.647257 2007314 cri.go:89] found id: "4828bd44aea438cda942b71c2f80e7f2d601bc85ff99b48cf48e14a03bfef35f"
	I1124 10:45:53.647263 2007314 cri.go:89] found id: "e86c30e4e2616da17c57bf36566f76243193a68aebf68df8f0a2e44a99680d1c"
	I1124 10:45:53.647285 2007314 cri.go:89] found id: "a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164"
	I1124 10:45:53.647295 2007314 cri.go:89] found id: "c5435f90e719aa2779bbe5f3b217b6402f384412c6e48e6340e0d29d24bbe98b"
	I1124 10:45:53.647298 2007314 cri.go:89] found id: "dcfbc31b64e74a5e46f6371b921ce733ba64c3b3efc2c060d173e151a9a78cd6"
	I1124 10:45:53.647304 2007314 cri.go:89] found id: "4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0"
	I1124 10:45:53.647307 2007314 cri.go:89] found id: ""
	I1124 10:45:53.647376 2007314 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 10:45:53.659432 2007314 retry.go:31] will retry after 290.071423ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T10:45:53Z" level=error msg="open /run/runc: no such file or directory"
	I1124 10:45:53.949925 2007314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:45:53.972202 2007314 pause.go:52] kubelet running: false
	I1124 10:45:53.972266 2007314 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 10:45:54.129454 2007314 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 10:45:54.129539 2007314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 10:45:54.196698 2007314 cri.go:89] found id: "7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775"
	I1124 10:45:54.196776 2007314 cri.go:89] found id: "97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23"
	I1124 10:45:54.196796 2007314 cri.go:89] found id: "993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82"
	I1124 10:45:54.196819 2007314 cri.go:89] found id: "632f540de17ba0538f526e70308122e739fd97ce7682b24f147c31e556bc48c0"
	I1124 10:45:54.196850 2007314 cri.go:89] found id: "25758a2f97099b15dd6a39598a2978fc9d586fb8cd9399a75480b182727d437d"
	I1124 10:45:54.196874 2007314 cri.go:89] found id: "4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36"
	I1124 10:45:54.196896 2007314 cri.go:89] found id: "7d04b12f6f11a5a508fafd445c9fbafeb2d5fbb41c9206693db9d7b163d59c81"
	I1124 10:45:54.196917 2007314 cri.go:89] found id: "c3454e38c0091545c6a26ed711bda1d653d8308c28b941594fa43bc7a9ab2937"
	I1124 10:45:54.196951 2007314 cri.go:89] found id: "4828bd44aea438cda942b71c2f80e7f2d601bc85ff99b48cf48e14a03bfef35f"
	I1124 10:45:54.196976 2007314 cri.go:89] found id: "e86c30e4e2616da17c57bf36566f76243193a68aebf68df8f0a2e44a99680d1c"
	I1124 10:45:54.196995 2007314 cri.go:89] found id: "a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164"
	I1124 10:45:54.197015 2007314 cri.go:89] found id: "c5435f90e719aa2779bbe5f3b217b6402f384412c6e48e6340e0d29d24bbe98b"
	I1124 10:45:54.197036 2007314 cri.go:89] found id: "dcfbc31b64e74a5e46f6371b921ce733ba64c3b3efc2c060d173e151a9a78cd6"
	I1124 10:45:54.197069 2007314 cri.go:89] found id: "4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0"
	I1124 10:45:54.197096 2007314 cri.go:89] found id: ""
	I1124 10:45:54.197195 2007314 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 10:45:54.211541 2007314 out.go:203] 
	W1124 10:45:54.214424 2007314 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T10:45:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 10:45:54.214444 2007314 out.go:285] * 
	W1124 10:45:54.225837 2007314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 10:45:54.228811 2007314 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-245240 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-245240
helpers_test.go:243: (dbg) docker inspect pause-245240:

-- stdout --
	[
	    {
	        "Id": "c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791",
	        "Created": "2025-11-24T10:44:02.923568892Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2000191,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T10:44:02.980828291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791/hostname",
	        "HostsPath": "/var/lib/docker/containers/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791/hosts",
	        "LogPath": "/var/lib/docker/containers/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791-json.log",
	        "Name": "/pause-245240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-245240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-245240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791",
	                "LowerDir": "/var/lib/docker/overlay2/7f92a97d19202f53377d7086545c0c8b4b33bd651ab3446e179396a178366c7c-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f92a97d19202f53377d7086545c0c8b4b33bd651ab3446e179396a178366c7c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f92a97d19202f53377d7086545c0c8b4b33bd651ab3446e179396a178366c7c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f92a97d19202f53377d7086545c0c8b4b33bd651ab3446e179396a178366c7c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-245240",
	                "Source": "/var/lib/docker/volumes/pause-245240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-245240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-245240",
	                "name.minikube.sigs.k8s.io": "pause-245240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed68fbed3a59dd9a9047448d39889069f51553da461f0060f3e243b6d81f2705",
	            "SandboxKey": "/var/run/docker/netns/ed68fbed3a59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35251"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35254"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35253"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-245240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:5a:b3:fe:90:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a758a0ee07b7ee5113db29e8c714def93fe09d2ec0934b199559745b56e483cb",
	                    "EndpointID": "4845dd1c20c4088333d71456d5a155a6ad8670ce6512ff1eaf7c2a73a1428d82",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-245240",
	                        "c8d2d9b65149"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-245240 -n pause-245240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-245240 -n pause-245240: exit status 2 (330.39794ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-245240 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-245240 logs -n 25: (1.433106595s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-538948 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:39 UTC │ 24 Nov 25 10:39 UTC │
	│ start   │ -p missing-upgrade-114074 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-114074    │ jenkins │ v1.32.0 │ 24 Nov 25 10:39 UTC │ 24 Nov 25 10:40 UTC │
	│ start   │ -p NoKubernetes-538948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:39 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p missing-upgrade-114074 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-114074    │ jenkins │ v1.37.0 │ 24 Nov 25 10:40 UTC │ 24 Nov 25 10:41 UTC │
	│ delete  │ -p missing-upgrade-114074                                                                                                                       │ missing-upgrade-114074    │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-306449 │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ delete  │ -p NoKubernetes-538948                                                                                                                          │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p NoKubernetes-538948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ ssh     │ -p NoKubernetes-538948 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │                     │
	│ stop    │ -p NoKubernetes-538948                                                                                                                          │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p NoKubernetes-538948 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ stop    │ -p kubernetes-upgrade-306449                                                                                                                    │ kubernetes-upgrade-306449 │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-306449 │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p NoKubernetes-538948 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │                     │
	│ delete  │ -p NoKubernetes-538948                                                                                                                          │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p stopped-upgrade-661807 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-661807    │ jenkins │ v1.32.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:42 UTC │
	│ stop    │ stopped-upgrade-661807 stop                                                                                                                     │ stopped-upgrade-661807    │ jenkins │ v1.32.0 │ 24 Nov 25 10:42 UTC │ 24 Nov 25 10:42 UTC │
	│ start   │ -p stopped-upgrade-661807 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-661807    │ jenkins │ v1.37.0 │ 24 Nov 25 10:42 UTC │ 24 Nov 25 10:42 UTC │
	│ delete  │ -p stopped-upgrade-661807                                                                                                                       │ stopped-upgrade-661807    │ jenkins │ v1.37.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:43 UTC │
	│ start   │ -p running-upgrade-832076 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-832076    │ jenkins │ v1.32.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:43 UTC │
	│ start   │ -p running-upgrade-832076 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-832076    │ jenkins │ v1.37.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:43 UTC │
	│ delete  │ -p running-upgrade-832076                                                                                                                       │ running-upgrade-832076    │ jenkins │ v1.37.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:43 UTC │
	│ start   │ -p pause-245240 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-245240              │ jenkins │ v1.37.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:45 UTC │
	│ start   │ -p pause-245240 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-245240              │ jenkins │ v1.37.0 │ 24 Nov 25 10:45 UTC │ 24 Nov 25 10:45 UTC │
	│ pause   │ -p pause-245240 --alsologtostderr -v=5                                                                                                          │ pause-245240              │ jenkins │ v1.37.0 │ 24 Nov 25 10:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 10:45:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 10:45:21.794210 2005062 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:45:21.794510 2005062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:45:21.794540 2005062 out.go:374] Setting ErrFile to fd 2...
	I1124 10:45:21.794561 2005062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:45:21.794853 2005062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:45:21.795290 2005062 out.go:368] Setting JSON to false
	I1124 10:45:21.796420 2005062 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":34072,"bootTime":1763947050,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 10:45:21.796535 2005062 start.go:143] virtualization:  
	I1124 10:45:21.801466 2005062 out.go:179] * [pause-245240] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 10:45:21.804644 2005062 notify.go:221] Checking for updates...
	I1124 10:45:21.807680 2005062 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 10:45:21.811049 2005062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 10:45:21.813939 2005062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:45:21.816913 2005062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 10:45:21.819770 2005062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 10:45:21.828575 2005062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 10:45:19.178462 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:19.188993 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:19.189059 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:19.230211 1986432 cri.go:89] found id: ""
	I1124 10:45:19.230233 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.230242 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:19.230249 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:19.230308 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:19.258132 1986432 cri.go:89] found id: ""
	I1124 10:45:19.258155 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.258176 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:19.258185 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:19.258246 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:19.285507 1986432 cri.go:89] found id: ""
	I1124 10:45:19.285531 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.285539 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:19.285547 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:19.285614 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:19.313743 1986432 cri.go:89] found id: ""
	I1124 10:45:19.313765 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.313774 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:19.313781 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:19.313841 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:19.341218 1986432 cri.go:89] found id: ""
	I1124 10:45:19.341248 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.341257 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:19.341265 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:19.341325 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:19.367950 1986432 cri.go:89] found id: ""
	I1124 10:45:19.367976 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.367985 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:19.367992 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:19.368053 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:19.394601 1986432 cri.go:89] found id: ""
	I1124 10:45:19.394627 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.394637 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:19.394644 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:19.394708 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:19.434228 1986432 cri.go:89] found id: ""
	I1124 10:45:19.434251 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.434260 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:19.434268 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:19.434280 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:19.493533 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:19.493559 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:19.577029 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:19.577071 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:19.595336 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:19.595370 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:19.665756 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:19.665779 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:19.665792 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:21.832167 2005062 config.go:182] Loaded profile config "pause-245240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:45:21.832778 2005062 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 10:45:21.862321 2005062 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 10:45:21.862438 2005062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:45:21.923234 2005062 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 10:45:21.913718497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:45:21.923388 2005062 docker.go:319] overlay module found
	I1124 10:45:21.926626 2005062 out.go:179] * Using the docker driver based on existing profile
	I1124 10:45:21.929602 2005062 start.go:309] selected driver: docker
	I1124 10:45:21.929625 2005062 start.go:927] validating driver "docker" against &{Name:pause-245240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:45:21.929762 2005062 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 10:45:21.929869 2005062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:45:21.987497 2005062 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 10:45:21.977997109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:45:21.987903 2005062 cni.go:84] Creating CNI manager for ""
	I1124 10:45:21.987968 2005062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:45:21.988016 2005062 start.go:353] cluster config:
	{Name:pause-245240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:45:21.991777 2005062 out.go:179] * Starting "pause-245240" primary control-plane node in "pause-245240" cluster
	I1124 10:45:21.994600 2005062 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 10:45:21.997511 2005062 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 10:45:22.001579 2005062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 10:45:22.001646 2005062 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1124 10:45:22.001659 2005062 cache.go:65] Caching tarball of preloaded images
	I1124 10:45:22.001703 2005062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 10:45:22.001785 2005062 preload.go:238] Found /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 10:45:22.001797 2005062 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 10:45:22.001955 2005062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/config.json ...
	I1124 10:45:22.022301 2005062 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 10:45:22.022326 2005062 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 10:45:22.022349 2005062 cache.go:243] Successfully downloaded all kic artifacts
	I1124 10:45:22.022395 2005062 start.go:360] acquireMachinesLock for pause-245240: {Name:mk98785f26338538e367a8dedc2aa1790321bc09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:45:22.022469 2005062 start.go:364] duration metric: took 52.752µs to acquireMachinesLock for "pause-245240"
	I1124 10:45:22.022494 2005062 start.go:96] Skipping create...Using existing machine configuration
	I1124 10:45:22.022506 2005062 fix.go:54] fixHost starting: 
	I1124 10:45:22.022792 2005062 cli_runner.go:164] Run: docker container inspect pause-245240 --format={{.State.Status}}
	I1124 10:45:22.040184 2005062 fix.go:112] recreateIfNeeded on pause-245240: state=Running err=<nil>
	W1124 10:45:22.040221 2005062 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 10:45:22.043440 2005062 out.go:252] * Updating the running docker "pause-245240" container ...
	I1124 10:45:22.043478 2005062 machine.go:94] provisionDockerMachine start ...
	I1124 10:45:22.043559 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:22.060516 2005062 main.go:143] libmachine: Using SSH client type: native
	I1124 10:45:22.060857 2005062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35250 <nil> <nil>}
	I1124 10:45:22.060872 2005062 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 10:45:22.213063 2005062 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-245240
	
	I1124 10:45:22.213130 2005062 ubuntu.go:182] provisioning hostname "pause-245240"
	I1124 10:45:22.213214 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:22.233628 2005062 main.go:143] libmachine: Using SSH client type: native
	I1124 10:45:22.233930 2005062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35250 <nil> <nil>}
	I1124 10:45:22.233940 2005062 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-245240 && echo "pause-245240" | sudo tee /etc/hostname
	I1124 10:45:22.408929 2005062 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-245240
	
	I1124 10:45:22.409060 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:22.430798 2005062 main.go:143] libmachine: Using SSH client type: native
	I1124 10:45:22.431112 2005062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35250 <nil> <nil>}
	I1124 10:45:22.431131 2005062 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-245240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-245240/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-245240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 10:45:22.601595 2005062 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 10:45:22.601638 2005062 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 10:45:22.601710 2005062 ubuntu.go:190] setting up certificates
	I1124 10:45:22.601721 2005062 provision.go:84] configureAuth start
	I1124 10:45:22.601809 2005062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-245240
	I1124 10:45:22.635362 2005062 provision.go:143] copyHostCerts
	I1124 10:45:22.635440 2005062 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 10:45:22.635462 2005062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 10:45:22.635547 2005062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 10:45:22.635662 2005062 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 10:45:22.635671 2005062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 10:45:22.635700 2005062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 10:45:22.635806 2005062 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 10:45:22.635819 2005062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 10:45:22.635852 2005062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 10:45:22.635912 2005062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.pause-245240 san=[127.0.0.1 192.168.76.2 localhost minikube pause-245240]
	I1124 10:45:22.863758 2005062 provision.go:177] copyRemoteCerts
	I1124 10:45:22.863865 2005062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 10:45:22.863928 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:22.884235 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:22.993562 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 10:45:23.013428 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 10:45:23.031677 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 10:45:23.050418 2005062 provision.go:87] duration metric: took 448.658841ms to configureAuth
	I1124 10:45:23.050448 2005062 ubuntu.go:206] setting minikube options for container-runtime
	I1124 10:45:23.050727 2005062 config.go:182] Loaded profile config "pause-245240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:45:23.050875 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:23.069143 2005062 main.go:143] libmachine: Using SSH client type: native
	I1124 10:45:23.069500 2005062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35250 <nil> <nil>}
	I1124 10:45:23.069525 2005062 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 10:45:22.216483 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:22.235923 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:22.235992 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:22.275278 1986432 cri.go:89] found id: ""
	I1124 10:45:22.275317 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.275330 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:22.275349 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:22.275425 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:22.321491 1986432 cri.go:89] found id: ""
	I1124 10:45:22.321518 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.321529 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:22.321536 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:22.321612 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:22.363465 1986432 cri.go:89] found id: ""
	I1124 10:45:22.363490 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.363499 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:22.363506 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:22.363568 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:22.399782 1986432 cri.go:89] found id: ""
	I1124 10:45:22.399808 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.399818 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:22.399825 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:22.399885 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:22.457990 1986432 cri.go:89] found id: ""
	I1124 10:45:22.458017 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.458025 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:22.458032 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:22.458092 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:22.517733 1986432 cri.go:89] found id: ""
	I1124 10:45:22.517759 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.517768 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:22.517775 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:22.517837 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:22.560877 1986432 cri.go:89] found id: ""
	I1124 10:45:22.560902 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.560911 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:22.560917 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:22.560974 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:22.592310 1986432 cri.go:89] found id: ""
	I1124 10:45:22.592344 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.592353 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:22.592362 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:22.592373 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:22.676382 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:22.676457 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:22.699655 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:22.699681 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:22.784335 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:22.784353 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:22.784365 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:22.840500 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:22.840577 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:25.377619 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:25.388294 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:25.388365 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:25.414360 1986432 cri.go:89] found id: ""
	I1124 10:45:25.414381 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.414390 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:25.414397 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:25.414454 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:25.441377 1986432 cri.go:89] found id: ""
	I1124 10:45:25.441403 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.441413 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:25.441420 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:25.441488 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:25.479479 1986432 cri.go:89] found id: ""
	I1124 10:45:25.479507 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.479516 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:25.479523 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:25.479581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:25.510312 1986432 cri.go:89] found id: ""
	I1124 10:45:25.510341 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.510349 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:25.510357 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:25.510416 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:25.541150 1986432 cri.go:89] found id: ""
	I1124 10:45:25.541175 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.541185 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:25.541192 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:25.541251 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:25.571686 1986432 cri.go:89] found id: ""
	I1124 10:45:25.571714 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.571723 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:25.571730 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:25.571790 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:25.598880 1986432 cri.go:89] found id: ""
	I1124 10:45:25.598901 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.598910 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:25.598917 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:25.598974 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:25.626005 1986432 cri.go:89] found id: ""
	I1124 10:45:25.626027 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.626036 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:25.626045 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:25.626056 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:25.666222 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:25.666258 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:25.703262 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:25.703294 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:25.778362 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:25.778400 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:25.796483 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:25.796514 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:25.866710 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:28.467522 2005062 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 10:45:28.467546 2005062 machine.go:97] duration metric: took 6.424061152s to provisionDockerMachine
	I1124 10:45:28.467557 2005062 start.go:293] postStartSetup for "pause-245240" (driver="docker")
	I1124 10:45:28.467568 2005062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 10:45:28.467649 2005062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 10:45:28.467697 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:28.498225 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:28.614312 2005062 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 10:45:28.619751 2005062 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 10:45:28.619783 2005062 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 10:45:28.619794 2005062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 10:45:28.619849 2005062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 10:45:28.619942 2005062 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 10:45:28.620054 2005062 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 10:45:28.630392 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 10:45:28.650433 2005062 start.go:296] duration metric: took 182.861021ms for postStartSetup
	I1124 10:45:28.650524 2005062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:45:28.650579 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:28.672667 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:28.786493 2005062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 10:45:28.796443 2005062 fix.go:56] duration metric: took 6.773932644s for fixHost
	I1124 10:45:28.796470 2005062 start.go:83] releasing machines lock for "pause-245240", held for 6.773989088s
	I1124 10:45:28.796550 2005062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-245240
	I1124 10:45:28.818717 2005062 ssh_runner.go:195] Run: cat /version.json
	I1124 10:45:28.818770 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:28.819028 2005062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 10:45:28.819085 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:28.854565 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:28.863309 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:28.980536 2005062 ssh_runner.go:195] Run: systemctl --version
	I1124 10:45:29.073377 2005062 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 10:45:29.115934 2005062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 10:45:29.120467 2005062 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 10:45:29.120567 2005062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 10:45:29.129264 2005062 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 10:45:29.129291 2005062 start.go:496] detecting cgroup driver to use...
	I1124 10:45:29.129350 2005062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 10:45:29.129422 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 10:45:29.145323 2005062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 10:45:29.158208 2005062 docker.go:218] disabling cri-docker service (if available) ...
	I1124 10:45:29.158304 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 10:45:29.173973 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 10:45:29.187184 2005062 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 10:45:29.342609 2005062 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 10:45:29.471568 2005062 docker.go:234] disabling docker service ...
	I1124 10:45:29.471784 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 10:45:29.487278 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 10:45:29.501242 2005062 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 10:45:29.639199 2005062 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 10:45:29.797668 2005062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 10:45:29.811105 2005062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 10:45:29.827569 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:29.990363 2005062 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 10:45:29.990451 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.034201 2005062 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 10:45:30.034281 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.071590 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.088171 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.099788 2005062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 10:45:30.110389 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.122149 2005062 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.131955 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.142217 2005062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 10:45:30.151182 2005062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 10:45:30.159664 2005062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:45:30.292787 2005062 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 10:45:30.503439 2005062 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 10:45:30.503513 2005062 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 10:45:30.507322 2005062 start.go:564] Will wait 60s for crictl version
	I1124 10:45:30.507383 2005062 ssh_runner.go:195] Run: which crictl
	I1124 10:45:30.510949 2005062 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 10:45:30.537378 2005062 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 10:45:30.537464 2005062 ssh_runner.go:195] Run: crio --version
	I1124 10:45:30.567177 2005062 ssh_runner.go:195] Run: crio --version
	I1124 10:45:30.598256 2005062 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 10:45:30.601184 2005062 cli_runner.go:164] Run: docker network inspect pause-245240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 10:45:30.616754 2005062 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 10:45:30.620777 2005062 kubeadm.go:884] updating cluster {Name:pause-245240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 10:45:30.620999 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:30.779367 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:30.937611 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:31.088794 2005062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 10:45:31.088953 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:31.246804 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:31.406614 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:31.568802 2005062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 10:45:31.613763 2005062 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 10:45:31.613793 2005062 crio.go:433] Images already preloaded, skipping extraction
	I1124 10:45:31.613859 2005062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 10:45:31.646524 2005062 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 10:45:31.646546 2005062 cache_images.go:86] Images are preloaded, skipping loading
	I1124 10:45:31.646553 2005062 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1124 10:45:31.646645 2005062 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-245240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 10:45:31.646720 2005062 ssh_runner.go:195] Run: crio config
	I1124 10:45:31.719187 2005062 cni.go:84] Creating CNI manager for ""
	I1124 10:45:31.719213 2005062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:45:31.719236 2005062 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 10:45:31.719267 2005062 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-245240 NodeName:pause-245240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 10:45:31.719425 2005062 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-245240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 10:45:31.719502 2005062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 10:45:31.729998 2005062 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 10:45:31.730116 2005062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 10:45:31.738994 2005062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1124 10:45:31.753039 2005062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 10:45:31.769685 2005062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1124 10:45:31.786737 2005062 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 10:45:31.791198 2005062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:45:28.366962 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:28.385684 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:28.385762 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:28.415652 1986432 cri.go:89] found id: ""
	I1124 10:45:28.415677 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.415687 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:28.415693 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:28.415759 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:28.452408 1986432 cri.go:89] found id: ""
	I1124 10:45:28.452431 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.452440 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:28.452447 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:28.452503 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:28.515827 1986432 cri.go:89] found id: ""
	I1124 10:45:28.515849 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.515857 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:28.515864 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:28.515922 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:28.549890 1986432 cri.go:89] found id: ""
	I1124 10:45:28.549918 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.549927 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:28.549934 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:28.549994 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:28.584109 1986432 cri.go:89] found id: ""
	I1124 10:45:28.584131 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.584139 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:28.584146 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:28.584207 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:28.624082 1986432 cri.go:89] found id: ""
	I1124 10:45:28.624104 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.624113 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:28.624120 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:28.624178 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:28.659882 1986432 cri.go:89] found id: ""
	I1124 10:45:28.659904 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.659913 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:28.659920 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:28.659980 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:28.700857 1986432 cri.go:89] found id: ""
	I1124 10:45:28.700879 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.700889 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:28.700898 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:28.700910 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:28.775188 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:28.775272 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:28.795604 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:28.795630 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:28.888919 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:28.888937 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:28.888949 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:28.945578 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:28.945658 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:31.477317 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:31.488832 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:31.488906 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:31.526049 1986432 cri.go:89] found id: ""
	I1124 10:45:31.526077 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.526087 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:31.526094 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:31.526152 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:31.552114 1986432 cri.go:89] found id: ""
	I1124 10:45:31.552139 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.552148 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:31.552154 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:31.552215 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:31.590559 1986432 cri.go:89] found id: ""
	I1124 10:45:31.590586 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.590596 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:31.590603 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:31.590663 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:31.623429 1986432 cri.go:89] found id: ""
	I1124 10:45:31.623456 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.623466 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:31.623473 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:31.623535 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:31.657007 1986432 cri.go:89] found id: ""
	I1124 10:45:31.657031 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.657040 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:31.657047 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:31.657135 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:31.685936 1986432 cri.go:89] found id: ""
	I1124 10:45:31.685961 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.685970 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:31.685977 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:31.686036 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:31.724393 1986432 cri.go:89] found id: ""
	I1124 10:45:31.724420 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.724429 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:31.724436 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:31.724493 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:31.760530 1986432 cri.go:89] found id: ""
	I1124 10:45:31.766988 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.767073 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:31.767880 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:31.767896 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:31.811494 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:31.811526 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:31.967979 2005062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 10:45:31.982120 2005062 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240 for IP: 192.168.76.2
	I1124 10:45:31.982140 2005062 certs.go:195] generating shared ca certs ...
	I1124 10:45:31.982155 2005062 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:45:31.982285 2005062 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 10:45:31.982324 2005062 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 10:45:31.982331 2005062 certs.go:257] generating profile certs ...
	I1124 10:45:31.982411 2005062 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.key
	I1124 10:45:31.982474 2005062 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/apiserver.key.c46533c1
	I1124 10:45:31.982515 2005062 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/proxy-client.key
	I1124 10:45:31.982616 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 10:45:31.982646 2005062 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 10:45:31.982654 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 10:45:31.982681 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 10:45:31.982704 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 10:45:31.982727 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 10:45:31.982769 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 10:45:31.983402 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 10:45:32.020027 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 10:45:32.073011 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 10:45:32.099976 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 10:45:32.135504 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 10:45:32.172404 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 10:45:32.261946 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 10:45:32.322982 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 10:45:32.363159 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 10:45:32.394343 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 10:45:32.434471 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 10:45:32.464213 2005062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 10:45:32.483882 2005062 ssh_runner.go:195] Run: openssl version
	I1124 10:45:32.496325 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 10:45:32.508438 2005062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:45:32.512710 2005062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:45:32.512818 2005062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:45:32.565782 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 10:45:32.579649 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 10:45:32.592191 2005062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 10:45:32.596320 2005062 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 10:45:32.596397 2005062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 10:45:32.640312 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 10:45:32.651594 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 10:45:32.661456 2005062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 10:45:32.666333 2005062 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 10:45:32.666419 2005062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 10:45:32.709335 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 10:45:32.718038 2005062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 10:45:32.722192 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 10:45:32.769840 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 10:45:32.814762 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 10:45:32.860756 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 10:45:32.909125 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 10:45:33.016794 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 10:45:33.106421 2005062 kubeadm.go:401] StartCluster: {Name:pause-245240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:45:33.106549 2005062 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 10:45:33.106629 2005062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 10:45:33.162902 2005062 cri.go:89] found id: "7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775"
	I1124 10:45:33.162933 2005062 cri.go:89] found id: "97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23"
	I1124 10:45:33.162939 2005062 cri.go:89] found id: "993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82"
	I1124 10:45:33.162942 2005062 cri.go:89] found id: "632f540de17ba0538f526e70308122e739fd97ce7682b24f147c31e556bc48c0"
	I1124 10:45:33.162945 2005062 cri.go:89] found id: "25758a2f97099b15dd6a39598a2978fc9d586fb8cd9399a75480b182727d437d"
	I1124 10:45:33.162949 2005062 cri.go:89] found id: "4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36"
	I1124 10:45:33.162954 2005062 cri.go:89] found id: "7d04b12f6f11a5a508fafd445c9fbafeb2d5fbb41c9206693db9d7b163d59c81"
	I1124 10:45:33.162957 2005062 cri.go:89] found id: "c3454e38c0091545c6a26ed711bda1d653d8308c28b941594fa43bc7a9ab2937"
	I1124 10:45:33.162960 2005062 cri.go:89] found id: "4828bd44aea438cda942b71c2f80e7f2d601bc85ff99b48cf48e14a03bfef35f"
	I1124 10:45:33.162968 2005062 cri.go:89] found id: "e86c30e4e2616da17c57bf36566f76243193a68aebf68df8f0a2e44a99680d1c"
	I1124 10:45:33.162971 2005062 cri.go:89] found id: "a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164"
	I1124 10:45:33.162977 2005062 cri.go:89] found id: "c5435f90e719aa2779bbe5f3b217b6402f384412c6e48e6340e0d29d24bbe98b"
	I1124 10:45:33.162982 2005062 cri.go:89] found id: "dcfbc31b64e74a5e46f6371b921ce733ba64c3b3efc2c060d173e151a9a78cd6"
	I1124 10:45:33.162985 2005062 cri.go:89] found id: "4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0"
	I1124 10:45:33.162996 2005062 cri.go:89] found id: ""
	I1124 10:45:33.163053 2005062 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 10:45:33.185929 2005062 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T10:45:33Z" level=error msg="open /run/runc: no such file or directory"
	I1124 10:45:33.186013 2005062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 10:45:33.195068 2005062 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 10:45:33.195096 2005062 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 10:45:33.195148 2005062 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 10:45:33.206129 2005062 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 10:45:33.206845 2005062 kubeconfig.go:125] found "pause-245240" server: "https://192.168.76.2:8443"
	I1124 10:45:33.207745 2005062 kapi.go:59] client config for pause-245240: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 10:45:33.208402 2005062 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 10:45:33.208423 2005062 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 10:45:33.208438 2005062 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 10:45:33.208447 2005062 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 10:45:33.208452 2005062 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 10:45:33.208851 2005062 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 10:45:33.218157 2005062 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 10:45:33.218199 2005062 kubeadm.go:602] duration metric: took 23.096537ms to restartPrimaryControlPlane
	I1124 10:45:33.218209 2005062 kubeadm.go:403] duration metric: took 111.79993ms to StartCluster
	I1124 10:45:33.218223 2005062 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:45:33.218298 2005062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:45:33.219209 2005062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:45:33.219445 2005062 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 10:45:33.219795 2005062 config.go:182] Loaded profile config "pause-245240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:45:33.219840 2005062 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 10:45:33.224429 2005062 out.go:179] * Verifying Kubernetes components...
	I1124 10:45:33.226393 2005062 out.go:179] * Enabled addons: 
	I1124 10:45:33.228302 2005062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:45:33.230498 2005062 addons.go:530] duration metric: took 10.653148ms for enable addons: enabled=[]
	I1124 10:45:33.444503 2005062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 10:45:33.457854 2005062 node_ready.go:35] waiting up to 6m0s for node "pause-245240" to be "Ready" ...
	I1124 10:45:31.903608 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:31.903649 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:31.921908 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:31.921948 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:31.999244 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:31.999279 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:31.999293 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:34.558990 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:34.570638 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:34.570725 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:34.608415 1986432 cri.go:89] found id: ""
	I1124 10:45:34.608444 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.608454 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:34.608460 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:34.608522 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:34.655752 1986432 cri.go:89] found id: ""
	I1124 10:45:34.655780 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.655789 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:34.655796 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:34.655866 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:34.707801 1986432 cri.go:89] found id: ""
	I1124 10:45:34.707829 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.707838 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:34.707845 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:34.707903 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:34.755972 1986432 cri.go:89] found id: ""
	I1124 10:45:34.755998 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.756009 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:34.756027 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:34.756101 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:34.803767 1986432 cri.go:89] found id: ""
	I1124 10:45:34.803796 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.803804 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:34.803812 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:34.803881 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:34.851006 1986432 cri.go:89] found id: ""
	I1124 10:45:34.851034 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.851043 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:34.851052 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:34.851115 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:34.904329 1986432 cri.go:89] found id: ""
	I1124 10:45:34.904358 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.904367 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:34.904374 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:34.904435 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:34.951467 1986432 cri.go:89] found id: ""
	I1124 10:45:34.951494 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.951503 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:34.951512 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:34.951524 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:35.042914 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:35.042954 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:35.063633 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:35.063669 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:35.167271 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:35.167295 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:35.167310 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:35.253298 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:35.253341 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:37.201304 2005062 node_ready.go:49] node "pause-245240" is "Ready"
	I1124 10:45:37.201337 2005062 node_ready.go:38] duration metric: took 3.743449649s for node "pause-245240" to be "Ready" ...
	I1124 10:45:37.201351 2005062 api_server.go:52] waiting for apiserver process to appear ...
	I1124 10:45:37.201411 2005062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:37.219067 2005062 api_server.go:72] duration metric: took 3.999579094s to wait for apiserver process to appear ...
	I1124 10:45:37.219096 2005062 api_server.go:88] waiting for apiserver healthz status ...
	I1124 10:45:37.219115 2005062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 10:45:37.300221 2005062 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 10:45:37.300264 2005062 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 10:45:37.719814 2005062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 10:45:37.728940 2005062 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 10:45:37.729023 2005062 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 10:45:38.219258 2005062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 10:45:38.239206 2005062 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 10:45:38.239231 2005062 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 10:45:38.719919 2005062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 10:45:38.728033 2005062 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 10:45:38.729134 2005062 api_server.go:141] control plane version: v1.34.2
	I1124 10:45:38.729157 2005062 api_server.go:131] duration metric: took 1.510053521s to wait for apiserver health ...
	I1124 10:45:38.729166 2005062 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 10:45:38.732624 2005062 system_pods.go:59] 7 kube-system pods found
	I1124 10:45:38.732660 2005062 system_pods.go:61] "coredns-66bc5c9577-xbq8z" [d9af75b1-2d5c-4114-b82d-eaaa86add98e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 10:45:38.732670 2005062 system_pods.go:61] "etcd-pause-245240" [6b4970fd-dccd-4f98-b975-c2a582df094e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 10:45:38.732675 2005062 system_pods.go:61] "kindnet-sq8vx" [396e6ff1-b0f2-4848-8adb-5c3752c2eb23] Running
	I1124 10:45:38.732681 2005062 system_pods.go:61] "kube-apiserver-pause-245240" [3452c20d-ea2a-45ca-97aa-12d7bd034ffb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 10:45:38.732688 2005062 system_pods.go:61] "kube-controller-manager-pause-245240" [5c5d8109-0a79-46d8-b72f-008d50494bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 10:45:38.732698 2005062 system_pods.go:61] "kube-proxy-vsqz2" [1c11f67f-7449-4aac-83be-3dd80c495669] Running
	I1124 10:45:38.732704 2005062 system_pods.go:61] "kube-scheduler-pause-245240" [fe8b31cd-6a39-48de-9a3b-640d1d84c753] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 10:45:38.732711 2005062 system_pods.go:74] duration metric: took 3.540095ms to wait for pod list to return data ...
	I1124 10:45:38.732719 2005062 default_sa.go:34] waiting for default service account to be created ...
	I1124 10:45:38.735467 2005062 default_sa.go:45] found service account: "default"
	I1124 10:45:38.735495 2005062 default_sa.go:55] duration metric: took 2.760628ms for default service account to be created ...
	I1124 10:45:38.735507 2005062 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 10:45:38.738438 2005062 system_pods.go:86] 7 kube-system pods found
	I1124 10:45:38.738475 2005062 system_pods.go:89] "coredns-66bc5c9577-xbq8z" [d9af75b1-2d5c-4114-b82d-eaaa86add98e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 10:45:38.738487 2005062 system_pods.go:89] "etcd-pause-245240" [6b4970fd-dccd-4f98-b975-c2a582df094e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 10:45:38.738493 2005062 system_pods.go:89] "kindnet-sq8vx" [396e6ff1-b0f2-4848-8adb-5c3752c2eb23] Running
	I1124 10:45:38.738499 2005062 system_pods.go:89] "kube-apiserver-pause-245240" [3452c20d-ea2a-45ca-97aa-12d7bd034ffb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 10:45:38.738506 2005062 system_pods.go:89] "kube-controller-manager-pause-245240" [5c5d8109-0a79-46d8-b72f-008d50494bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 10:45:38.738515 2005062 system_pods.go:89] "kube-proxy-vsqz2" [1c11f67f-7449-4aac-83be-3dd80c495669] Running
	I1124 10:45:38.738522 2005062 system_pods.go:89] "kube-scheduler-pause-245240" [fe8b31cd-6a39-48de-9a3b-640d1d84c753] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 10:45:38.738533 2005062 system_pods.go:126] duration metric: took 3.020464ms to wait for k8s-apps to be running ...
	I1124 10:45:38.738541 2005062 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 10:45:38.738602 2005062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:45:38.751581 2005062 system_svc.go:56] duration metric: took 13.028111ms WaitForService to wait for kubelet
	I1124 10:45:38.751653 2005062 kubeadm.go:587] duration metric: took 5.532168928s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 10:45:38.751678 2005062 node_conditions.go:102] verifying NodePressure condition ...
	I1124 10:45:38.754552 2005062 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 10:45:38.754588 2005062 node_conditions.go:123] node cpu capacity is 2
	I1124 10:45:38.754603 2005062 node_conditions.go:105] duration metric: took 2.918727ms to run NodePressure ...
	I1124 10:45:38.754616 2005062 start.go:242] waiting for startup goroutines ...
	I1124 10:45:38.754623 2005062 start.go:247] waiting for cluster config update ...
	I1124 10:45:38.754631 2005062 start.go:256] writing updated cluster config ...
	I1124 10:45:38.754939 2005062 ssh_runner.go:195] Run: rm -f paused
	I1124 10:45:38.758429 2005062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 10:45:38.759041 2005062 kapi.go:59] client config for pause-245240: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 10:45:38.761895 2005062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xbq8z" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 10:45:40.767344 2005062 pod_ready.go:104] pod "coredns-66bc5c9577-xbq8z" is not "Ready", error: <nil>
	I1124 10:45:37.835094 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:37.848220 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:37.848315 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:37.888724 1986432 cri.go:89] found id: ""
	I1124 10:45:37.888756 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.888766 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:37.888773 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:37.888836 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:37.924370 1986432 cri.go:89] found id: ""
	I1124 10:45:37.924396 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.924405 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:37.924412 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:37.924477 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:37.974590 1986432 cri.go:89] found id: ""
	I1124 10:45:37.974626 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.974636 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:37.974643 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:37.974723 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:38.015170 1986432 cri.go:89] found id: ""
	I1124 10:45:38.015209 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.015220 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:38.015236 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:38.015330 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:38.074440 1986432 cri.go:89] found id: ""
	I1124 10:45:38.074499 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.074512 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:38.074533 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:38.074618 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:38.130684 1986432 cri.go:89] found id: ""
	I1124 10:45:38.130720 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.130731 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:38.130738 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:38.130812 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:38.170942 1986432 cri.go:89] found id: ""
	I1124 10:45:38.170983 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.170994 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:38.171001 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:38.171073 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:38.213467 1986432 cri.go:89] found id: ""
	I1124 10:45:38.213508 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.213518 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:38.213533 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:38.213563 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:38.363953 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:38.363977 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:38.363989 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:38.405041 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:38.405077 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:38.435192 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:38.435222 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:38.509425 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:38.509466 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:41.028226 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:41.038492 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:41.038559 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:41.066358 1986432 cri.go:89] found id: ""
	I1124 10:45:41.066381 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.066390 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:41.066397 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:41.066455 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:41.095869 1986432 cri.go:89] found id: ""
	I1124 10:45:41.095892 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.095901 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:41.095908 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:41.095965 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:41.124298 1986432 cri.go:89] found id: ""
	I1124 10:45:41.124321 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.124330 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:41.124336 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:41.124394 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:41.150773 1986432 cri.go:89] found id: ""
	I1124 10:45:41.150799 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.150807 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:41.150815 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:41.150876 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:41.177034 1986432 cri.go:89] found id: ""
	I1124 10:45:41.177057 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.177066 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:41.177072 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:41.177190 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:41.214595 1986432 cri.go:89] found id: ""
	I1124 10:45:41.214626 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.214635 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:41.214642 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:41.214700 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:41.246230 1986432 cri.go:89] found id: ""
	I1124 10:45:41.246255 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.246264 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:41.246271 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:41.246338 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:41.283456 1986432 cri.go:89] found id: ""
	I1124 10:45:41.283481 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.283490 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:41.283499 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:41.283511 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:41.358438 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:41.358475 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:41.379384 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:41.379415 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:41.445291 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:41.445364 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:41.445407 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:41.488247 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:41.488291 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:45:42.767505 2005062 pod_ready.go:104] pod "coredns-66bc5c9577-xbq8z" is not "Ready", error: <nil>
	I1124 10:45:44.270409 2005062 pod_ready.go:94] pod "coredns-66bc5c9577-xbq8z" is "Ready"
	I1124 10:45:44.270433 2005062 pod_ready.go:86] duration metric: took 5.508512947s for pod "coredns-66bc5c9577-xbq8z" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:44.273052 2005062 pod_ready.go:83] waiting for pod "etcd-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:46.278546 2005062 pod_ready.go:94] pod "etcd-pause-245240" is "Ready"
	I1124 10:45:46.278574 2005062 pod_ready.go:86] duration metric: took 2.00550358s for pod "etcd-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:46.280767 2005062 pod_ready.go:83] waiting for pod "kube-apiserver-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:44.023834 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:44.034548 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:44.034620 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:44.064681 1986432 cri.go:89] found id: ""
	I1124 10:45:44.064705 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.064714 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:44.064721 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:44.064781 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:44.100184 1986432 cri.go:89] found id: ""
	I1124 10:45:44.100207 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.100217 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:44.100224 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:44.100281 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:44.142287 1986432 cri.go:89] found id: ""
	I1124 10:45:44.142314 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.142327 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:44.142334 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:44.142393 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:44.181305 1986432 cri.go:89] found id: ""
	I1124 10:45:44.181333 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.181342 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:44.181349 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:44.181430 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:44.218457 1986432 cri.go:89] found id: ""
	I1124 10:45:44.218483 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.218502 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:44.218509 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:44.218581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:44.251490 1986432 cri.go:89] found id: ""
	I1124 10:45:44.251517 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.251526 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:44.251532 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:44.251596 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:44.295854 1986432 cri.go:89] found id: ""
	I1124 10:45:44.295881 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.295890 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:44.295897 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:44.295962 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:44.326463 1986432 cri.go:89] found id: ""
	I1124 10:45:44.326484 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.326492 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:44.326501 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:44.326513 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:44.413633 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:44.413657 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:44.413670 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:44.455548 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:44.455583 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:44.484839 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:44.484875 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:44.559243 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:44.559282 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:47.786145 2005062 pod_ready.go:94] pod "kube-apiserver-pause-245240" is "Ready"
	I1124 10:45:47.786178 2005062 pod_ready.go:86] duration metric: took 1.505384287s for pod "kube-apiserver-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:47.788401 2005062 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:47.793319 2005062 pod_ready.go:94] pod "kube-controller-manager-pause-245240" is "Ready"
	I1124 10:45:47.793351 2005062 pod_ready.go:86] duration metric: took 4.926694ms for pod "kube-controller-manager-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:47.795621 2005062 pod_ready.go:83] waiting for pod "kube-proxy-vsqz2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:47.865683 2005062 pod_ready.go:94] pod "kube-proxy-vsqz2" is "Ready"
	I1124 10:45:47.865711 2005062 pod_ready.go:86] duration metric: took 70.063719ms for pod "kube-proxy-vsqz2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:48.065961 2005062 pod_ready.go:83] waiting for pod "kube-scheduler-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 10:45:50.073440 2005062 pod_ready.go:104] pod "kube-scheduler-pause-245240" is not "Ready", error: <nil>
	I1124 10:45:51.571138 2005062 pod_ready.go:94] pod "kube-scheduler-pause-245240" is "Ready"
	I1124 10:45:51.571170 2005062 pod_ready.go:86] duration metric: took 3.505179392s for pod "kube-scheduler-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:51.571184 2005062 pod_ready.go:40] duration metric: took 12.812720948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 10:45:51.628882 2005062 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1124 10:45:51.632073 2005062 out.go:179] * Done! kubectl is now configured to use "pause-245240" cluster and "default" namespace by default
	I1124 10:45:47.077797 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:47.088211 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:47.088277 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:47.118728 1986432 cri.go:89] found id: ""
	I1124 10:45:47.118750 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.118760 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:47.118767 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:47.118825 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:47.150483 1986432 cri.go:89] found id: ""
	I1124 10:45:47.150507 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.150516 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:47.150523 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:47.150581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:47.181735 1986432 cri.go:89] found id: ""
	I1124 10:45:47.181758 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.181767 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:47.181774 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:47.181833 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:47.214333 1986432 cri.go:89] found id: ""
	I1124 10:45:47.214356 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.214365 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:47.214371 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:47.214432 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:47.244171 1986432 cri.go:89] found id: ""
	I1124 10:45:47.244250 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.244273 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:47.244296 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:47.244395 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:47.275473 1986432 cri.go:89] found id: ""
	I1124 10:45:47.275494 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.275503 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:47.275510 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:47.275568 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:47.309070 1986432 cri.go:89] found id: ""
	I1124 10:45:47.309092 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.309181 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:47.309191 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:47.309250 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:47.336153 1986432 cri.go:89] found id: ""
	I1124 10:45:47.336174 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.336183 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:47.336193 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:47.336204 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:47.406220 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:47.406257 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:47.424789 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:47.424817 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:47.493299 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:47.493323 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:47.493339 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:47.537076 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:47.537117 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:50.067104 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:50.078283 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:50.078363 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:50.109047 1986432 cri.go:89] found id: ""
	I1124 10:45:50.109074 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.109083 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:50.109090 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:50.109175 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:50.137023 1986432 cri.go:89] found id: ""
	I1124 10:45:50.137046 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.137054 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:50.137060 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:50.137146 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:50.168293 1986432 cri.go:89] found id: ""
	I1124 10:45:50.168316 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.168333 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:50.168340 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:50.168402 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:50.196807 1986432 cri.go:89] found id: ""
	I1124 10:45:50.196831 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.196840 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:50.196847 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:50.196918 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:50.235910 1986432 cri.go:89] found id: ""
	I1124 10:45:50.235932 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.235941 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:50.235947 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:50.236012 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:50.272648 1986432 cri.go:89] found id: ""
	I1124 10:45:50.272671 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.272681 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:50.272688 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:50.272750 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:50.306519 1986432 cri.go:89] found id: ""
	I1124 10:45:50.306542 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.306550 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:50.306556 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:50.306621 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:50.337670 1986432 cri.go:89] found id: ""
	I1124 10:45:50.337692 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.337700 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:50.337710 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:50.337721 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:50.408914 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:50.408955 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:50.427976 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:50.428169 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:50.503359 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:50.503430 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:50.503458 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:50.544309 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:50.544354 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.308586027Z" level=info msg="Started container" PID=2230 containerID=4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36 description=kube-system/kube-proxy-vsqz2/kube-proxy id=359f4cff-9d1d-46d6-841e-fa39c99b3e98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bef0c472188bf523459a7471c4cf5a4cc3478a1718cfcde68a99865dd35c7bc
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.326034988Z" level=info msg="Created container 97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23: kube-system/kube-controller-manager-pause-245240/kube-controller-manager" id=1ef968d4-a36f-4b93-b5dd-bdd692b95cea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.326385943Z" level=info msg="Created container 993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82: kube-system/kube-apiserver-pause-245240/kube-apiserver" id=34bdcc70-d219-4d3b-8535-2a7f4c1e7d7a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.327155687Z" level=info msg="Starting container: 97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23" id=06ff96bd-9841-4ae8-b0ab-c60d702a153b name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.327350217Z" level=info msg="Starting container: 993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82" id=66871a28-b009-458a-b8cf-b188ba95fe22 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.330754803Z" level=info msg="Created container 7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775: kube-system/kube-scheduler-pause-245240/kube-scheduler" id=e60331d7-f55a-4b1e-904c-45aab2b12367 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.331724501Z" level=info msg="Starting container: 7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775" id=6a05ff30-919f-4052-b199-162a63c03631 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.34181645Z" level=info msg="Started container" PID=2265 containerID=97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23 description=kube-system/kube-controller-manager-pause-245240/kube-controller-manager id=06ff96bd-9841-4ae8-b0ab-c60d702a153b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f94c7acd315ed8935158803928299b01ae5a5a1b35ee5a126926a13082bf326b
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.342582592Z" level=info msg="Started container" PID=2288 containerID=7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775 description=kube-system/kube-scheduler-pause-245240/kube-scheduler id=6a05ff30-919f-4052-b199-162a63c03631 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1f1f2eb95066576d0ce693736adf524f93d5fe260936cd024ad492f9a5627e6
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.346280417Z" level=info msg="Started container" PID=2267 containerID=993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82 description=kube-system/kube-apiserver-pause-245240/kube-apiserver id=66871a28-b009-458a-b8cf-b188ba95fe22 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dab5c05b7de15319aebb39cd62a62202d5905b64000fc5a3ffbfe28f672e0839
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.565334287Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.568912708Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.568947326Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.568969538Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.571868572Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.571904109Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.571925213Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.575047234Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.575083755Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.57510458Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.578432085Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.578467474Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.578490219Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.581687261Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.581721641Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	7a603655c81fe       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   23 seconds ago       Running             kube-scheduler            1                   f1f1f2eb95066       kube-scheduler-pause-245240            kube-system
	97aee88b9e3dc       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   23 seconds ago       Running             kube-controller-manager   1                   f94c7acd315ed       kube-controller-manager-pause-245240   kube-system
	993bc385c7eab       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   23 seconds ago       Running             kube-apiserver            1                   dab5c05b7de15       kube-apiserver-pause-245240            kube-system
	632f540de17ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   23 seconds ago       Running             kindnet-cni               1                   cb7a659b3c4b5       kindnet-sq8vx                          kube-system
	25758a2f97099       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   9ac711ec07cbe       coredns-66bc5c9577-xbq8z               kube-system
	4692783a7119a       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   23 seconds ago       Running             kube-proxy                1                   2bef0c472188b       kube-proxy-vsqz2                       kube-system
	7d04b12f6f11a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   23 seconds ago       Running             etcd                      1                   73042cdc2e4ad       etcd-pause-245240                      kube-system
	c3454e38c0091       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   9ac711ec07cbe       coredns-66bc5c9577-xbq8z               kube-system
	4828bd44aea43       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   cb7a659b3c4b5       kindnet-sq8vx                          kube-system
	e86c30e4e2616       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   2bef0c472188b       kube-proxy-vsqz2                       kube-system
	a19dba52bf31e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   f94c7acd315ed       kube-controller-manager-pause-245240   kube-system
	c5435f90e719a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   73042cdc2e4ad       etcd-pause-245240                      kube-system
	dcfbc31b64e74       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   dab5c05b7de15       kube-apiserver-pause-245240            kube-system
	4b3239d2756fd       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   f1f1f2eb95066       kube-scheduler-pause-245240            kube-system
	
	
	==> coredns [25758a2f97099b15dd6a39598a2978fc9d586fb8cd9399a75480b182727d437d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49277 - 7162 "HINFO IN 5982923945174560406.3285165719839043450. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011612449s
	
	
	==> coredns [c3454e38c0091545c6a26ed711bda1d653d8308c28b941594fa43bc7a9ab2937] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50923 - 50337 "HINFO IN 2255500921777626642.6403281054394144409. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0152116s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-245240
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-245240
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=pause-245240
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T10_44_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 10:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-245240
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 10:45:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 10:45:19 +0000   Mon, 24 Nov 2025 10:44:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 10:45:19 +0000   Mon, 24 Nov 2025 10:44:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 10:45:19 +0000   Mon, 24 Nov 2025 10:44:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 10:45:19 +0000   Mon, 24 Nov 2025 10:45:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-245240
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1ead4f7d-870c-4125-bc37-6f030fad8409
	  Boot ID:                    27a92f9c-55a4-4798-92be-317cdb891088
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xbq8z                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-245240                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-sq8vx                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-245240             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-245240    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-vsqz2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-245240             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-245240 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-245240 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node pause-245240 status is now: NodeHasSufficientPID
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-245240 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-245240 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-245240 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s                node-controller  Node pause-245240 event: Registered Node pause-245240 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-245240 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-245240 event: Registered Node pause-245240 in Controller
	
	
	==> dmesg <==
	[ +29.372278] overlayfs: idmapped layers are currently not supported
	[Nov24 10:17] overlayfs: idmapped layers are currently not supported
	[Nov24 10:18] overlayfs: idmapped layers are currently not supported
	[  +3.899881] overlayfs: idmapped layers are currently not supported
	[Nov24 10:19] overlayfs: idmapped layers are currently not supported
	[ +41.367824] overlayfs: idmapped layers are currently not supported
	[Nov24 10:21] overlayfs: idmapped layers are currently not supported
	[Nov24 10:26] overlayfs: idmapped layers are currently not supported
	[ +33.890897] overlayfs: idmapped layers are currently not supported
	[Nov24 10:28] overlayfs: idmapped layers are currently not supported
	[Nov24 10:29] overlayfs: idmapped layers are currently not supported
	[Nov24 10:30] overlayfs: idmapped layers are currently not supported
	[Nov24 10:32] overlayfs: idmapped layers are currently not supported
	[ +26.643756] overlayfs: idmapped layers are currently not supported
	[  +9.285653] overlayfs: idmapped layers are currently not supported
	[Nov24 10:33] overlayfs: idmapped layers are currently not supported
	[ +18.325038] overlayfs: idmapped layers are currently not supported
	[Nov24 10:34] overlayfs: idmapped layers are currently not supported
	[Nov24 10:35] overlayfs: idmapped layers are currently not supported
	[Nov24 10:36] overlayfs: idmapped layers are currently not supported
	[Nov24 10:37] overlayfs: idmapped layers are currently not supported
	[Nov24 10:39] overlayfs: idmapped layers are currently not supported
	[Nov24 10:41] overlayfs: idmapped layers are currently not supported
	[ +25.006505] overlayfs: idmapped layers are currently not supported
	[Nov24 10:44] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7d04b12f6f11a5a508fafd445c9fbafeb2d5fbb41c9206693db9d7b163d59c81] <==
	{"level":"warn","ts":"2025-11-24T10:45:35.455002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.474098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.541521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.542339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.555755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.593943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.614141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.634291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.662585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.687872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.702001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.715545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.746733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.753179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.773718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.790130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.811094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.830376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.841599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.862388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.892306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.913355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.933874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.949978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:36.083623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33862","server-name":"","error":"EOF"}
	
	
	==> etcd [c5435f90e719aa2779bbe5f3b217b6402f384412c6e48e6340e0d29d24bbe98b] <==
	{"level":"warn","ts":"2025-11-24T10:44:28.255197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.279293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.315701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.331110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.367626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.373851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.467412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51552","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T10:45:23.252255Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T10:45:23.252312Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-245240","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-24T10:45:23.252438Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T10:45:23.396857Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T10:45:23.398371Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T10:45:23.398421Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-24T10:45:23.398457Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-11-24T10:45:23.398469Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T10:45:23.398557Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T10:45:23.398588Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T10:45:23.398599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-24T10:45:23.398562Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T10:45:23.398627Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T10:45:23.398644Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T10:45:23.401989Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-24T10:45:23.402068Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T10:45:23.402095Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T10:45:23.402109Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-245240","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 10:45:55 up  9:28,  0 user,  load average: 3.74, 3.01, 2.29
	Linux pause-245240 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4828bd44aea438cda942b71c2f80e7f2d601bc85ff99b48cf48e14a03bfef35f] <==
	I1124 10:44:39.010741       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 10:44:39.010990       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 10:44:39.011117       1 main.go:148] setting mtu 1500 for CNI 
	I1124 10:44:39.011127       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 10:44:39.011139       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T10:44:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 10:44:39.207333       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 10:44:39.207350       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 10:44:39.207359       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 10:44:39.207621       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 10:45:09.207100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 10:45:09.208318       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 10:45:09.208434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 10:45:09.208568       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1124 10:45:10.607515       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 10:45:10.607557       1 metrics.go:72] Registering metrics
	I1124 10:45:10.607626       1 controller.go:711] "Syncing nftables rules"
	I1124 10:45:19.213186       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 10:45:19.213304       1 main.go:301] handling current node
	
	
	==> kindnet [632f540de17ba0538f526e70308122e739fd97ce7682b24f147c31e556bc48c0] <==
	I1124 10:45:32.360070       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 10:45:32.360435       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 10:45:32.360558       1 main.go:148] setting mtu 1500 for CNI 
	I1124 10:45:32.360571       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 10:45:32.360583       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T10:45:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 10:45:32.563208       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 10:45:32.563225       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 10:45:32.563233       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 10:45:32.563518       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 10:45:37.365198       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 10:45:37.365249       1 metrics.go:72] Registering metrics
	I1124 10:45:37.365336       1 controller.go:711] "Syncing nftables rules"
	I1124 10:45:42.564952       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 10:45:42.565019       1 main.go:301] handling current node
	I1124 10:45:52.562793       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 10:45:52.562848       1 main.go:301] handling current node
	
	
	==> kube-apiserver [993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82] <==
	I1124 10:45:37.230373       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 10:45:37.241434       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 10:45:37.241531       1 policy_source.go:240] refreshing policies
	I1124 10:45:37.262622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 10:45:37.269546       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 10:45:37.273768       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 10:45:37.274223       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 10:45:37.314015       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 10:45:37.314079       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 10:45:37.323888       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 10:45:37.324262       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 10:45:37.325062       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 10:45:37.325251       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 10:45:37.325317       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 10:45:37.325640       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 10:45:37.325690       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 10:45:37.325721       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 10:45:37.337706       1 cache.go:39] Caches are synced for autoregister controller
	E1124 10:45:37.354194       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 10:45:37.971842       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 10:45:39.216528       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 10:45:40.590059       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 10:45:40.732678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 10:45:40.930760       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 10:45:40.979995       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [dcfbc31b64e74a5e46f6371b921ce733ba64c3b3efc2c060d173e151a9a78cd6] <==
	W1124 10:45:23.275405       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.275516       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.275574       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278361       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278440       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278487       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278531       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278576       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278617       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278659       1 logging.go:55] [core] [Channel #25 SubChannel #27]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278698       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278738       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278779       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278824       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278864       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278910       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278947       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278991       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279027       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279071       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279108       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279144       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279388       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279552       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.280174       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23] <==
	I1124 10:45:40.577803       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 10:45:40.577913       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-245240"
	I1124 10:45:40.577979       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 10:45:40.578428       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 10:45:40.578605       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 10:45:40.581235       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 10:45:40.581281       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 10:45:40.585170       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 10:45:40.586636       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 10:45:40.586732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 10:45:40.587928       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 10:45:40.597455       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 10:45:40.600616       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 10:45:40.602797       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 10:45:40.605084       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 10:45:40.608486       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 10:45:40.609719       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 10:45:40.623763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 10:45:40.623854       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 10:45:40.623869       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 10:45:40.624726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 10:45:40.626752       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 10:45:40.637946       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 10:45:40.640293       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 10:45:40.645617       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164] <==
	I1124 10:44:37.376840       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 10:44:37.376942       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 10:44:37.378078       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 10:44:37.384051       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 10:44:37.384459       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 10:44:37.393927       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 10:44:37.394255       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 10:44:37.394340       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 10:44:37.400088       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 10:44:37.400855       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 10:44:37.391262       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 10:44:37.402640       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 10:44:37.402751       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 10:44:37.402806       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 10:44:37.412379       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 10:44:37.385170       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 10:44:37.415457       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 10:44:37.440686       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 10:44:37.454625       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 10:44:37.458029       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-245240" podCIDRs=["10.244.0.0/24"]
	I1124 10:44:37.459626       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 10:44:37.460602       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 10:44:37.460649       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 10:44:37.459661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 10:45:22.330995       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36] <==
	I1124 10:45:35.900849       1 server_linux.go:53] "Using iptables proxy"
	I1124 10:45:36.978141       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 10:45:37.390115       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 10:45:37.390336       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 10:45:37.390487       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 10:45:37.606255       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 10:45:37.606371       1 server_linux.go:132] "Using iptables Proxier"
	I1124 10:45:37.621203       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 10:45:37.621623       1 server.go:527] "Version info" version="v1.34.2"
	I1124 10:45:37.623332       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 10:45:37.624701       1 config.go:200] "Starting service config controller"
	I1124 10:45:37.624758       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 10:45:37.624803       1 config.go:106] "Starting endpoint slice config controller"
	I1124 10:45:37.624857       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 10:45:37.624897       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 10:45:37.624943       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 10:45:37.628936       1 config.go:309] "Starting node config controller"
	I1124 10:45:37.629021       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 10:45:37.629052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 10:45:37.725781       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 10:45:37.729197       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 10:45:37.729233       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e86c30e4e2616da17c57bf36566f76243193a68aebf68df8f0a2e44a99680d1c] <==
	I1124 10:44:39.010681       1 server_linux.go:53] "Using iptables proxy"
	I1124 10:44:39.107084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 10:44:39.207994       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 10:44:39.208034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 10:44:39.208100       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 10:44:39.275608       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 10:44:39.275737       1 server_linux.go:132] "Using iptables Proxier"
	I1124 10:44:39.279610       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 10:44:39.280078       1 server.go:527] "Version info" version="v1.34.2"
	I1124 10:44:39.281169       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 10:44:39.282499       1 config.go:200] "Starting service config controller"
	I1124 10:44:39.282546       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 10:44:39.282565       1 config.go:106] "Starting endpoint slice config controller"
	I1124 10:44:39.282569       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 10:44:39.282580       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 10:44:39.282583       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 10:44:39.283291       1 config.go:309] "Starting node config controller"
	I1124 10:44:39.283339       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 10:44:39.283368       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 10:44:39.382983       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 10:44:39.383030       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 10:44:39.382982       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0] <==
	I1124 10:44:29.322133       1 serving.go:386] Generated self-signed cert in-memory
	I1124 10:44:31.891727       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 10:44:31.891760       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 10:44:31.896471       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 10:44:31.896518       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 10:44:31.896554       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:44:31.896562       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:44:31.896575       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:44:31.896589       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:44:31.897009       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 10:44:31.909480       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 10:44:31.996698       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 10:44:31.996650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:44:31.996848       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:23.247853       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 10:45:23.247887       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 10:45:23.247906       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 10:45:23.247931       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:23.247960       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:45:23.247978       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1124 10:45:23.248253       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 10:45:23.248293       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775] <==
	I1124 10:45:36.538992       1 serving.go:386] Generated self-signed cert in-memory
	I1124 10:45:38.072443       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 10:45:38.072587       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 10:45:38.083491       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 10:45:38.083668       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 10:45:38.083758       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 10:45:38.083829       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 10:45:38.093350       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:45:38.093674       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:45:38.093728       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:38.093770       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:38.184513       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 10:45:38.194272       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:38.195393       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.061434    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsqz2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c11f67f-7449-4aac-83be-3dd80c495669" pod="kube-system/kube-proxy-vsqz2"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.061629    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-sq8vx\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="396e6ff1-b0f2-4848-8adb-5c3752c2eb23" pod="kube-system/kindnet-sq8vx"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.062263    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xbq8z\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d9af75b1-2d5c-4114-b82d-eaaa86add98e" pod="kube-system/coredns-66bc5c9577-xbq8z"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: I1124 10:45:32.066850    1300 scope.go:117] "RemoveContainer" containerID="4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067331    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9d1e0521098c6a05af3ffd81f3a6f83e" pod="kube-system/kube-scheduler-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067507    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f3c88e2d69300286a68b4bae07303b03" pod="kube-system/etcd-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067667    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5016a6264a8c350870f6cea806c9c026" pod="kube-system/kube-apiserver-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067824    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsqz2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c11f67f-7449-4aac-83be-3dd80c495669" pod="kube-system/kube-proxy-vsqz2"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067980    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-sq8vx\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="396e6ff1-b0f2-4848-8adb-5c3752c2eb23" pod="kube-system/kindnet-sq8vx"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.068139    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xbq8z\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d9af75b1-2d5c-4114-b82d-eaaa86add98e" pod="kube-system/coredns-66bc5c9577-xbq8z"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: I1124 10:45:32.070347    1300 scope.go:117] "RemoveContainer" containerID="a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.070785    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsqz2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c11f67f-7449-4aac-83be-3dd80c495669" pod="kube-system/kube-proxy-vsqz2"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.070958    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-sq8vx\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="396e6ff1-b0f2-4848-8adb-5c3752c2eb23" pod="kube-system/kindnet-sq8vx"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071112    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xbq8z\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d9af75b1-2d5c-4114-b82d-eaaa86add98e" pod="kube-system/coredns-66bc5c9577-xbq8z"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071268    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c3ffec50de57a0792b9e7ed063c8ccc5" pod="kube-system/kube-controller-manager-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071424    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9d1e0521098c6a05af3ffd81f3a6f83e" pod="kube-system/kube-scheduler-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071599    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f3c88e2d69300286a68b4bae07303b03" pod="kube-system/etcd-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071761    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5016a6264a8c350870f6cea806c9c026" pod="kube-system/kube-apiserver-pause-245240"
	Nov 24 10:45:37 pause-245240 kubelet[1300]: E1124 10:45:37.143418    1300 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-245240\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-245240' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 24 10:45:37 pause-245240 kubelet[1300]: E1124 10:45:37.144772    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-245240\" is forbidden: User \"system:node:pause-245240\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-245240' and this object" podUID="c3ffec50de57a0792b9e7ed063c8ccc5" pod="kube-system/kube-controller-manager-pause-245240"
	Nov 24 10:45:37 pause-245240 kubelet[1300]: E1124 10:45:37.186596    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-245240\" is forbidden: User \"system:node:pause-245240\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-245240' and this object" podUID="9d1e0521098c6a05af3ffd81f3a6f83e" pod="kube-system/kube-scheduler-pause-245240"
	Nov 24 10:45:43 pause-245240 kubelet[1300]: W1124 10:45:43.051144    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 24 10:45:52 pause-245240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 10:45:52 pause-245240 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 10:45:52 pause-245240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-245240 -n pause-245240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-245240 -n pause-245240: exit status 2 (494.418594ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-245240 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-245240
helpers_test.go:243: (dbg) docker inspect pause-245240:

-- stdout --
	[
	    {
	        "Id": "c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791",
	        "Created": "2025-11-24T10:44:02.923568892Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2000191,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T10:44:02.980828291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791/hostname",
	        "HostsPath": "/var/lib/docker/containers/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791/hosts",
	        "LogPath": "/var/lib/docker/containers/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791/c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791-json.log",
	        "Name": "/pause-245240",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-245240:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-245240",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c8d2d9b65149c33f4434568fd5032b4dc8aadfcdeceac0fcff4f1e3d42d51791",
	                "LowerDir": "/var/lib/docker/overlay2/7f92a97d19202f53377d7086545c0c8b4b33bd651ab3446e179396a178366c7c-init/diff:/var/lib/docker/overlay2/ef19988a245ba97ffdc4be8afaf890b17cf1a7bae9c730ea3428ce44cdfe3a16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f92a97d19202f53377d7086545c0c8b4b33bd651ab3446e179396a178366c7c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f92a97d19202f53377d7086545c0c8b4b33bd651ab3446e179396a178366c7c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f92a97d19202f53377d7086545c0c8b4b33bd651ab3446e179396a178366c7c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-245240",
	                "Source": "/var/lib/docker/volumes/pause-245240/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-245240",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-245240",
	                "name.minikube.sigs.k8s.io": "pause-245240",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed68fbed3a59dd9a9047448d39889069f51553da461f0060f3e243b6d81f2705",
	            "SandboxKey": "/var/run/docker/netns/ed68fbed3a59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35251"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35254"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35253"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-245240": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:5a:b3:fe:90:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a758a0ee07b7ee5113db29e8c714def93fe09d2ec0934b199559745b56e483cb",
	                    "EndpointID": "4845dd1c20c4088333d71456d5a155a6ad8670ce6512ff1eaf7c2a73a1428d82",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-245240",
	                        "c8d2d9b65149"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-245240 -n pause-245240
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-245240 -n pause-245240: exit status 2 (380.150113ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-245240 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-245240 logs -n 25: (1.433733924s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-538948 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:39 UTC │ 24 Nov 25 10:39 UTC │
	│ start   │ -p missing-upgrade-114074 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-114074    │ jenkins │ v1.32.0 │ 24 Nov 25 10:39 UTC │ 24 Nov 25 10:40 UTC │
	│ start   │ -p NoKubernetes-538948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:39 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p missing-upgrade-114074 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-114074    │ jenkins │ v1.37.0 │ 24 Nov 25 10:40 UTC │ 24 Nov 25 10:41 UTC │
	│ delete  │ -p missing-upgrade-114074                                                                                                                       │ missing-upgrade-114074    │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-306449 │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ delete  │ -p NoKubernetes-538948                                                                                                                          │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p NoKubernetes-538948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ ssh     │ -p NoKubernetes-538948 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │                     │
	│ stop    │ -p NoKubernetes-538948                                                                                                                          │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p NoKubernetes-538948 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ stop    │ -p kubernetes-upgrade-306449                                                                                                                    │ kubernetes-upgrade-306449 │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p kubernetes-upgrade-306449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-306449 │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p NoKubernetes-538948 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │                     │
	│ delete  │ -p NoKubernetes-538948                                                                                                                          │ NoKubernetes-538948       │ jenkins │ v1.37.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:41 UTC │
	│ start   │ -p stopped-upgrade-661807 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-661807    │ jenkins │ v1.32.0 │ 24 Nov 25 10:41 UTC │ 24 Nov 25 10:42 UTC │
	│ stop    │ stopped-upgrade-661807 stop                                                                                                                     │ stopped-upgrade-661807    │ jenkins │ v1.32.0 │ 24 Nov 25 10:42 UTC │ 24 Nov 25 10:42 UTC │
	│ start   │ -p stopped-upgrade-661807 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-661807    │ jenkins │ v1.37.0 │ 24 Nov 25 10:42 UTC │ 24 Nov 25 10:42 UTC │
	│ delete  │ -p stopped-upgrade-661807                                                                                                                       │ stopped-upgrade-661807    │ jenkins │ v1.37.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:43 UTC │
	│ start   │ -p running-upgrade-832076 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-832076    │ jenkins │ v1.32.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:43 UTC │
	│ start   │ -p running-upgrade-832076 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-832076    │ jenkins │ v1.37.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:43 UTC │
	│ delete  │ -p running-upgrade-832076                                                                                                                       │ running-upgrade-832076    │ jenkins │ v1.37.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:43 UTC │
	│ start   │ -p pause-245240 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-245240              │ jenkins │ v1.37.0 │ 24 Nov 25 10:43 UTC │ 24 Nov 25 10:45 UTC │
	│ start   │ -p pause-245240 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-245240              │ jenkins │ v1.37.0 │ 24 Nov 25 10:45 UTC │ 24 Nov 25 10:45 UTC │
	│ pause   │ -p pause-245240 --alsologtostderr -v=5                                                                                                          │ pause-245240              │ jenkins │ v1.37.0 │ 24 Nov 25 10:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 10:45:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 10:45:21.794210 2005062 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:45:21.794510 2005062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:45:21.794540 2005062 out.go:374] Setting ErrFile to fd 2...
	I1124 10:45:21.794561 2005062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:45:21.794853 2005062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:45:21.795290 2005062 out.go:368] Setting JSON to false
	I1124 10:45:21.796420 2005062 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":34072,"bootTime":1763947050,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 10:45:21.796535 2005062 start.go:143] virtualization:  
	I1124 10:45:21.801466 2005062 out.go:179] * [pause-245240] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 10:45:21.804644 2005062 notify.go:221] Checking for updates...
	I1124 10:45:21.807680 2005062 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 10:45:21.811049 2005062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 10:45:21.813939 2005062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:45:21.816913 2005062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 10:45:21.819770 2005062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 10:45:21.828575 2005062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 10:45:19.178462 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:19.188993 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:19.189059 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:19.230211 1986432 cri.go:89] found id: ""
	I1124 10:45:19.230233 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.230242 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:19.230249 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:19.230308 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:19.258132 1986432 cri.go:89] found id: ""
	I1124 10:45:19.258155 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.258176 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:19.258185 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:19.258246 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:19.285507 1986432 cri.go:89] found id: ""
	I1124 10:45:19.285531 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.285539 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:19.285547 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:19.285614 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:19.313743 1986432 cri.go:89] found id: ""
	I1124 10:45:19.313765 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.313774 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:19.313781 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:19.313841 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:19.341218 1986432 cri.go:89] found id: ""
	I1124 10:45:19.341248 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.341257 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:19.341265 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:19.341325 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:19.367950 1986432 cri.go:89] found id: ""
	I1124 10:45:19.367976 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.367985 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:19.367992 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:19.368053 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:19.394601 1986432 cri.go:89] found id: ""
	I1124 10:45:19.394627 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.394637 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:19.394644 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:19.394708 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:19.434228 1986432 cri.go:89] found id: ""
	I1124 10:45:19.434251 1986432 logs.go:282] 0 containers: []
	W1124 10:45:19.434260 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:19.434268 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:19.434280 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:19.493533 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:19.493559 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:19.577029 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:19.577071 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:19.595336 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:19.595370 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:19.665756 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:19.665779 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:19.665792 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:21.832167 2005062 config.go:182] Loaded profile config "pause-245240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:45:21.832778 2005062 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 10:45:21.862321 2005062 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 10:45:21.862438 2005062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:45:21.923234 2005062 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 10:45:21.913718497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:45:21.923388 2005062 docker.go:319] overlay module found
	I1124 10:45:21.926626 2005062 out.go:179] * Using the docker driver based on existing profile
	I1124 10:45:21.929602 2005062 start.go:309] selected driver: docker
	I1124 10:45:21.929625 2005062 start.go:927] validating driver "docker" against &{Name:pause-245240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:45:21.929762 2005062 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 10:45:21.929869 2005062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:45:21.987497 2005062 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 10:45:21.977997109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:45:21.987903 2005062 cni.go:84] Creating CNI manager for ""
	I1124 10:45:21.987968 2005062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:45:21.988016 2005062 start.go:353] cluster config:
	{Name:pause-245240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:45:21.991777 2005062 out.go:179] * Starting "pause-245240" primary control-plane node in "pause-245240" cluster
	I1124 10:45:21.994600 2005062 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 10:45:21.997511 2005062 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 10:45:22.001579 2005062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 10:45:22.001646 2005062 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1124 10:45:22.001659 2005062 cache.go:65] Caching tarball of preloaded images
	I1124 10:45:22.001703 2005062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 10:45:22.001785 2005062 preload.go:238] Found /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 10:45:22.001797 2005062 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1124 10:45:22.001955 2005062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/config.json ...
	I1124 10:45:22.022301 2005062 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 10:45:22.022326 2005062 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 10:45:22.022349 2005062 cache.go:243] Successfully downloaded all kic artifacts
	I1124 10:45:22.022395 2005062 start.go:360] acquireMachinesLock for pause-245240: {Name:mk98785f26338538e367a8dedc2aa1790321bc09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 10:45:22.022469 2005062 start.go:364] duration metric: took 52.752µs to acquireMachinesLock for "pause-245240"
	I1124 10:45:22.022494 2005062 start.go:96] Skipping create...Using existing machine configuration
	I1124 10:45:22.022506 2005062 fix.go:54] fixHost starting: 
	I1124 10:45:22.022792 2005062 cli_runner.go:164] Run: docker container inspect pause-245240 --format={{.State.Status}}
	I1124 10:45:22.040184 2005062 fix.go:112] recreateIfNeeded on pause-245240: state=Running err=<nil>
	W1124 10:45:22.040221 2005062 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 10:45:22.043440 2005062 out.go:252] * Updating the running docker "pause-245240" container ...
	I1124 10:45:22.043478 2005062 machine.go:94] provisionDockerMachine start ...
	I1124 10:45:22.043559 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:22.060516 2005062 main.go:143] libmachine: Using SSH client type: native
	I1124 10:45:22.060857 2005062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35250 <nil> <nil>}
	I1124 10:45:22.060872 2005062 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 10:45:22.213063 2005062 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-245240
	
	I1124 10:45:22.213130 2005062 ubuntu.go:182] provisioning hostname "pause-245240"
	I1124 10:45:22.213214 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:22.233628 2005062 main.go:143] libmachine: Using SSH client type: native
	I1124 10:45:22.233930 2005062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35250 <nil> <nil>}
	I1124 10:45:22.233940 2005062 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-245240 && echo "pause-245240" | sudo tee /etc/hostname
	I1124 10:45:22.408929 2005062 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-245240
	
	I1124 10:45:22.409060 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:22.430798 2005062 main.go:143] libmachine: Using SSH client type: native
	I1124 10:45:22.431112 2005062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35250 <nil> <nil>}
	I1124 10:45:22.431131 2005062 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-245240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-245240/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-245240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 10:45:22.601595 2005062 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 10:45:22.601638 2005062 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-1804834/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-1804834/.minikube}
	I1124 10:45:22.601710 2005062 ubuntu.go:190] setting up certificates
	I1124 10:45:22.601721 2005062 provision.go:84] configureAuth start
	I1124 10:45:22.601809 2005062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-245240
	I1124 10:45:22.635362 2005062 provision.go:143] copyHostCerts
	I1124 10:45:22.635440 2005062 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem, removing ...
	I1124 10:45:22.635462 2005062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem
	I1124 10:45:22.635547 2005062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.pem (1078 bytes)
	I1124 10:45:22.635662 2005062 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem, removing ...
	I1124 10:45:22.635671 2005062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem
	I1124 10:45:22.635700 2005062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/cert.pem (1123 bytes)
	I1124 10:45:22.635806 2005062 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem, removing ...
	I1124 10:45:22.635819 2005062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem
	I1124 10:45:22.635852 2005062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-1804834/.minikube/key.pem (1675 bytes)
	I1124 10:45:22.635912 2005062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem org=jenkins.pause-245240 san=[127.0.0.1 192.168.76.2 localhost minikube pause-245240]
	I1124 10:45:22.863758 2005062 provision.go:177] copyRemoteCerts
	I1124 10:45:22.863865 2005062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 10:45:22.863928 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:22.884235 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:22.993562 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 10:45:23.013428 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 10:45:23.031677 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 10:45:23.050418 2005062 provision.go:87] duration metric: took 448.658841ms to configureAuth
	I1124 10:45:23.050448 2005062 ubuntu.go:206] setting minikube options for container-runtime
	I1124 10:45:23.050727 2005062 config.go:182] Loaded profile config "pause-245240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:45:23.050875 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:23.069143 2005062 main.go:143] libmachine: Using SSH client type: native
	I1124 10:45:23.069500 2005062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35250 <nil> <nil>}
	I1124 10:45:23.069525 2005062 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 10:45:22.216483 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:22.235923 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:22.235992 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:22.275278 1986432 cri.go:89] found id: ""
	I1124 10:45:22.275317 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.275330 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:22.275349 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:22.275425 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:22.321491 1986432 cri.go:89] found id: ""
	I1124 10:45:22.321518 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.321529 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:22.321536 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:22.321612 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:22.363465 1986432 cri.go:89] found id: ""
	I1124 10:45:22.363490 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.363499 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:22.363506 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:22.363568 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:22.399782 1986432 cri.go:89] found id: ""
	I1124 10:45:22.399808 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.399818 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:22.399825 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:22.399885 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:22.457990 1986432 cri.go:89] found id: ""
	I1124 10:45:22.458017 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.458025 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:22.458032 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:22.458092 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:22.517733 1986432 cri.go:89] found id: ""
	I1124 10:45:22.517759 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.517768 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:22.517775 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:22.517837 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:22.560877 1986432 cri.go:89] found id: ""
	I1124 10:45:22.560902 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.560911 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:22.560917 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:22.560974 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:22.592310 1986432 cri.go:89] found id: ""
	I1124 10:45:22.592344 1986432 logs.go:282] 0 containers: []
	W1124 10:45:22.592353 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:22.592362 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:22.592373 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:22.676382 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:22.676457 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:22.699655 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:22.699681 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:22.784335 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:22.784353 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:22.784365 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:22.840500 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:22.840577 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:25.377619 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:25.388294 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:25.388365 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:25.414360 1986432 cri.go:89] found id: ""
	I1124 10:45:25.414381 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.414390 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:25.414397 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:25.414454 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:25.441377 1986432 cri.go:89] found id: ""
	I1124 10:45:25.441403 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.441413 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:25.441420 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:25.441488 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:25.479479 1986432 cri.go:89] found id: ""
	I1124 10:45:25.479507 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.479516 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:25.479523 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:25.479581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:25.510312 1986432 cri.go:89] found id: ""
	I1124 10:45:25.510341 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.510349 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:25.510357 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:25.510416 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:25.541150 1986432 cri.go:89] found id: ""
	I1124 10:45:25.541175 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.541185 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:25.541192 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:25.541251 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:25.571686 1986432 cri.go:89] found id: ""
	I1124 10:45:25.571714 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.571723 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:25.571730 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:25.571790 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:25.598880 1986432 cri.go:89] found id: ""
	I1124 10:45:25.598901 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.598910 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:25.598917 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:25.598974 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:25.626005 1986432 cri.go:89] found id: ""
	I1124 10:45:25.626027 1986432 logs.go:282] 0 containers: []
	W1124 10:45:25.626036 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:25.626045 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:25.626056 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:25.666222 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:25.666258 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:25.703262 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:25.703294 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:25.778362 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:25.778400 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:25.796483 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:25.796514 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:25.866710 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:28.467522 2005062 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 10:45:28.467546 2005062 machine.go:97] duration metric: took 6.424061152s to provisionDockerMachine
	I1124 10:45:28.467557 2005062 start.go:293] postStartSetup for "pause-245240" (driver="docker")
	I1124 10:45:28.467568 2005062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 10:45:28.467649 2005062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 10:45:28.467697 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:28.498225 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:28.614312 2005062 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 10:45:28.619751 2005062 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 10:45:28.619783 2005062 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 10:45:28.619794 2005062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/addons for local assets ...
	I1124 10:45:28.619849 2005062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-1804834/.minikube/files for local assets ...
	I1124 10:45:28.619942 2005062 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem -> 18067042.pem in /etc/ssl/certs
	I1124 10:45:28.620054 2005062 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 10:45:28.630392 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 10:45:28.650433 2005062 start.go:296] duration metric: took 182.861021ms for postStartSetup
	I1124 10:45:28.650524 2005062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:45:28.650579 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:28.672667 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:28.786493 2005062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 10:45:28.796443 2005062 fix.go:56] duration metric: took 6.773932644s for fixHost
	I1124 10:45:28.796470 2005062 start.go:83] releasing machines lock for "pause-245240", held for 6.773989088s
	I1124 10:45:28.796550 2005062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-245240
	I1124 10:45:28.818717 2005062 ssh_runner.go:195] Run: cat /version.json
	I1124 10:45:28.818770 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:28.819028 2005062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 10:45:28.819085 2005062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-245240
	I1124 10:45:28.854565 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:28.863309 2005062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35250 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/pause-245240/id_rsa Username:docker}
	I1124 10:45:28.980536 2005062 ssh_runner.go:195] Run: systemctl --version
	I1124 10:45:29.073377 2005062 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 10:45:29.115934 2005062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 10:45:29.120467 2005062 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 10:45:29.120567 2005062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 10:45:29.129264 2005062 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 10:45:29.129291 2005062 start.go:496] detecting cgroup driver to use...
	I1124 10:45:29.129350 2005062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 10:45:29.129422 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 10:45:29.145323 2005062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 10:45:29.158208 2005062 docker.go:218] disabling cri-docker service (if available) ...
	I1124 10:45:29.158304 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 10:45:29.173973 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 10:45:29.187184 2005062 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 10:45:29.342609 2005062 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 10:45:29.471568 2005062 docker.go:234] disabling docker service ...
	I1124 10:45:29.471784 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 10:45:29.487278 2005062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 10:45:29.501242 2005062 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 10:45:29.639199 2005062 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 10:45:29.797668 2005062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 10:45:29.811105 2005062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 10:45:29.827569 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:29.990363 2005062 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 10:45:29.990451 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.034201 2005062 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 10:45:30.034281 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.071590 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.088171 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.099788 2005062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 10:45:30.110389 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.122149 2005062 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.131955 2005062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 10:45:30.142217 2005062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 10:45:30.151182 2005062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 10:45:30.159664 2005062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:45:30.292787 2005062 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 10:45:30.503439 2005062 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 10:45:30.503513 2005062 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 10:45:30.507322 2005062 start.go:564] Will wait 60s for crictl version
	I1124 10:45:30.507383 2005062 ssh_runner.go:195] Run: which crictl
	I1124 10:45:30.510949 2005062 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 10:45:30.537378 2005062 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 10:45:30.537464 2005062 ssh_runner.go:195] Run: crio --version
	I1124 10:45:30.567177 2005062 ssh_runner.go:195] Run: crio --version
	I1124 10:45:30.598256 2005062 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1124 10:45:30.601184 2005062 cli_runner.go:164] Run: docker network inspect pause-245240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 10:45:30.616754 2005062 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 10:45:30.620777 2005062 kubeadm.go:884] updating cluster {Name:pause-245240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 10:45:30.620999 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:30.779367 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:30.937611 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:31.088794 2005062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 10:45:31.088953 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:31.246804 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:31.406614 2005062 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubeadm.sha256
	I1124 10:45:31.568802 2005062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 10:45:31.613763 2005062 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 10:45:31.613793 2005062 crio.go:433] Images already preloaded, skipping extraction
	I1124 10:45:31.613859 2005062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 10:45:31.646524 2005062 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 10:45:31.646546 2005062 cache_images.go:86] Images are preloaded, skipping loading
	I1124 10:45:31.646553 2005062 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1124 10:45:31.646645 2005062 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-245240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 10:45:31.646720 2005062 ssh_runner.go:195] Run: crio config
	I1124 10:45:31.719187 2005062 cni.go:84] Creating CNI manager for ""
	I1124 10:45:31.719213 2005062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 10:45:31.719236 2005062 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 10:45:31.719267 2005062 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-245240 NodeName:pause-245240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 10:45:31.719425 2005062 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-245240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
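An aside on the dump above: it is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`). A minimal sketch counting the embedded documents by their `kind:` lines; the `/tmp` path and the trimmed-down contents are illustrative, not the real file:

```shell
# Write a stripped-down stand-in for the multi-document kubeadm config.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Each YAML document carries exactly one top-level kind: line.
grep -c '^kind:' /tmp/kubeadm-demo.yaml
```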
	
	I1124 10:45:31.719502 2005062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 10:45:31.729998 2005062 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 10:45:31.730116 2005062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 10:45:31.738994 2005062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1124 10:45:31.753039 2005062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 10:45:31.769685 2005062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1124 10:45:31.786737 2005062 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 10:45:31.791198 2005062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:45:28.366962 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:28.385684 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:28.385762 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:28.415652 1986432 cri.go:89] found id: ""
	I1124 10:45:28.415677 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.415687 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:28.415693 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:28.415759 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:28.452408 1986432 cri.go:89] found id: ""
	I1124 10:45:28.452431 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.452440 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:28.452447 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:28.452503 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:28.515827 1986432 cri.go:89] found id: ""
	I1124 10:45:28.515849 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.515857 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:28.515864 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:28.515922 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:28.549890 1986432 cri.go:89] found id: ""
	I1124 10:45:28.549918 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.549927 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:28.549934 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:28.549994 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:28.584109 1986432 cri.go:89] found id: ""
	I1124 10:45:28.584131 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.584139 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:28.584146 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:28.584207 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:28.624082 1986432 cri.go:89] found id: ""
	I1124 10:45:28.624104 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.624113 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:28.624120 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:28.624178 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:28.659882 1986432 cri.go:89] found id: ""
	I1124 10:45:28.659904 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.659913 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:28.659920 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:28.659980 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:28.700857 1986432 cri.go:89] found id: ""
	I1124 10:45:28.700879 1986432 logs.go:282] 0 containers: []
	W1124 10:45:28.700889 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:28.700898 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:28.700910 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:28.775188 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:28.775272 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:28.795604 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:28.795630 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:28.888919 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:28.888937 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:28.888949 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:28.945578 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:28.945658 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
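An aside on the command in the line above: ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`` chains two fallbacks — resolve crictl's full path (or keep the bare name if `which` finds nothing), then fall back to `docker ps -a` if the first command fails. The same idiom with stand-in functions (names are illustrative):

```shell
# Stand-in for crictl failing (e.g. not installed or socket unavailable).
primary() { return 1; }
# Stand-in for the docker ps -a fallback.
fallback() { echo "fallback ran"; }
# || runs the right-hand side only when the left-hand side exits non-zero.
primary || fallback
```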
	I1124 10:45:31.477317 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:31.488832 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:31.488906 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:31.526049 1986432 cri.go:89] found id: ""
	I1124 10:45:31.526077 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.526087 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:31.526094 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:31.526152 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:31.552114 1986432 cri.go:89] found id: ""
	I1124 10:45:31.552139 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.552148 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:31.552154 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:31.552215 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:31.590559 1986432 cri.go:89] found id: ""
	I1124 10:45:31.590586 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.590596 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:31.590603 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:31.590663 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:31.623429 1986432 cri.go:89] found id: ""
	I1124 10:45:31.623456 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.623466 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:31.623473 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:31.623535 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:31.657007 1986432 cri.go:89] found id: ""
	I1124 10:45:31.657031 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.657040 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:31.657047 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:31.657135 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:31.685936 1986432 cri.go:89] found id: ""
	I1124 10:45:31.685961 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.685970 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:31.685977 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:31.686036 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:31.724393 1986432 cri.go:89] found id: ""
	I1124 10:45:31.724420 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.724429 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:31.724436 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:31.724493 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:31.760530 1986432 cri.go:89] found id: ""
	I1124 10:45:31.766988 1986432 logs.go:282] 0 containers: []
	W1124 10:45:31.767073 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:31.767880 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:31.767896 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:31.811494 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:31.811526 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:31.967979 2005062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 10:45:31.982120 2005062 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240 for IP: 192.168.76.2
	I1124 10:45:31.982140 2005062 certs.go:195] generating shared ca certs ...
	I1124 10:45:31.982155 2005062 certs.go:227] acquiring lock for ca certs: {Name:mk84be5bbc98b723e62c17d72c09edb89fa80dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:45:31.982285 2005062 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key
	I1124 10:45:31.982324 2005062 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key
	I1124 10:45:31.982331 2005062 certs.go:257] generating profile certs ...
	I1124 10:45:31.982411 2005062 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.key
	I1124 10:45:31.982474 2005062 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/apiserver.key.c46533c1
	I1124 10:45:31.982515 2005062 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/proxy-client.key
	I1124 10:45:31.982616 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem (1338 bytes)
	W1124 10:45:31.982646 2005062 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704_empty.pem, impossibly tiny 0 bytes
	I1124 10:45:31.982654 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 10:45:31.982681 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/ca.pem (1078 bytes)
	I1124 10:45:31.982704 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/cert.pem (1123 bytes)
	I1124 10:45:31.982727 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/key.pem (1675 bytes)
	I1124 10:45:31.982769 2005062 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem (1708 bytes)
	I1124 10:45:31.983402 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 10:45:32.020027 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 10:45:32.073011 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 10:45:32.099976 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 10:45:32.135504 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 10:45:32.172404 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 10:45:32.261946 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 10:45:32.322982 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 10:45:32.363159 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 10:45:32.394343 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/certs/1806704.pem --> /usr/share/ca-certificates/1806704.pem (1338 bytes)
	I1124 10:45:32.434471 2005062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/ssl/certs/18067042.pem --> /usr/share/ca-certificates/18067042.pem (1708 bytes)
	I1124 10:45:32.464213 2005062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 10:45:32.483882 2005062 ssh_runner.go:195] Run: openssl version
	I1124 10:45:32.496325 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 10:45:32.508438 2005062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:45:32.512710 2005062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:45:32.512818 2005062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 10:45:32.565782 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 10:45:32.579649 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806704.pem && ln -fs /usr/share/ca-certificates/1806704.pem /etc/ssl/certs/1806704.pem"
	I1124 10:45:32.592191 2005062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806704.pem
	I1124 10:45:32.596320 2005062 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 09:38 /usr/share/ca-certificates/1806704.pem
	I1124 10:45:32.596397 2005062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806704.pem
	I1124 10:45:32.640312 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806704.pem /etc/ssl/certs/51391683.0"
	I1124 10:45:32.651594 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18067042.pem && ln -fs /usr/share/ca-certificates/18067042.pem /etc/ssl/certs/18067042.pem"
	I1124 10:45:32.661456 2005062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18067042.pem
	I1124 10:45:32.666333 2005062 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 09:38 /usr/share/ca-certificates/18067042.pem
	I1124 10:45:32.666419 2005062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18067042.pem
	I1124 10:45:32.709335 2005062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18067042.pem /etc/ssl/certs/3ec20f2e.0"
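An aside on the `ln -fs` lines above: the symlink names (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) come from OpenSSL's subject-hash scheme — `openssl x509 -hash` prints an 8-hex-digit hash of the certificate's subject, and `/etc/ssl/certs/<hash>.0` lets OpenSSL find a CA by that hash at verification time. A minimal sketch, assuming `openssl` is on PATH; the throwaway self-signed cert and `/tmp` paths are illustrative:

```shell
# Generate a throwaway self-signed CA certificate to hash.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.pem -subj "/CN=demoCA" -days 1 2>/dev/null
# -hash prints the subject hash used to name the /etc/ssl/certs/<hash>.0 symlink.
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
echo "would link: /etc/ssl/certs/${hash}.0 -> /tmp/demo-ca.pem"
```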
	I1124 10:45:32.718038 2005062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 10:45:32.722192 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 10:45:32.769840 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 10:45:32.814762 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 10:45:32.860756 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 10:45:32.909125 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 10:45:33.016794 2005062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
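An aside on the `-checkend 86400` checks above: `openssl x509 -checkend N` exits 0 when the certificate will still be valid N seconds from now, so these commands test whether each cert expires within 24 hours (86400 s) and needs regeneration. A minimal sketch with a throwaway cert, assuming `openssl` is on PATH:

```shell
# Throwaway cert valid for 2 days, so the 24h check below should pass.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ck.key -out /tmp/ck.pem \
  -subj "/CN=checkend-demo" -days 2 2>/dev/null
# Exit status 0 means: still valid 86400 seconds from now.
if openssl x509 -noout -in /tmp/ck.pem -checkend 86400; then
  echo "valid for at least 24h"
fi
```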
	I1124 10:45:33.106421 2005062 kubeadm.go:401] StartCluster: {Name:pause-245240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-245240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:45:33.106549 2005062 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 10:45:33.106629 2005062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 10:45:33.162902 2005062 cri.go:89] found id: "7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775"
	I1124 10:45:33.162933 2005062 cri.go:89] found id: "97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23"
	I1124 10:45:33.162939 2005062 cri.go:89] found id: "993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82"
	I1124 10:45:33.162942 2005062 cri.go:89] found id: "632f540de17ba0538f526e70308122e739fd97ce7682b24f147c31e556bc48c0"
	I1124 10:45:33.162945 2005062 cri.go:89] found id: "25758a2f97099b15dd6a39598a2978fc9d586fb8cd9399a75480b182727d437d"
	I1124 10:45:33.162949 2005062 cri.go:89] found id: "4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36"
	I1124 10:45:33.162954 2005062 cri.go:89] found id: "7d04b12f6f11a5a508fafd445c9fbafeb2d5fbb41c9206693db9d7b163d59c81"
	I1124 10:45:33.162957 2005062 cri.go:89] found id: "c3454e38c0091545c6a26ed711bda1d653d8308c28b941594fa43bc7a9ab2937"
	I1124 10:45:33.162960 2005062 cri.go:89] found id: "4828bd44aea438cda942b71c2f80e7f2d601bc85ff99b48cf48e14a03bfef35f"
	I1124 10:45:33.162968 2005062 cri.go:89] found id: "e86c30e4e2616da17c57bf36566f76243193a68aebf68df8f0a2e44a99680d1c"
	I1124 10:45:33.162971 2005062 cri.go:89] found id: "a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164"
	I1124 10:45:33.162977 2005062 cri.go:89] found id: "c5435f90e719aa2779bbe5f3b217b6402f384412c6e48e6340e0d29d24bbe98b"
	I1124 10:45:33.162982 2005062 cri.go:89] found id: "dcfbc31b64e74a5e46f6371b921ce733ba64c3b3efc2c060d173e151a9a78cd6"
	I1124 10:45:33.162985 2005062 cri.go:89] found id: "4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0"
	I1124 10:45:33.162996 2005062 cri.go:89] found id: ""
	I1124 10:45:33.163053 2005062 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 10:45:33.185929 2005062 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T10:45:33Z" level=error msg="open /run/runc: no such file or directory"
	I1124 10:45:33.186013 2005062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 10:45:33.195068 2005062 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 10:45:33.195096 2005062 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 10:45:33.195148 2005062 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 10:45:33.206129 2005062 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 10:45:33.206845 2005062 kubeconfig.go:125] found "pause-245240" server: "https://192.168.76.2:8443"
	I1124 10:45:33.207745 2005062 kapi.go:59] client config for pause-245240: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 10:45:33.208402 2005062 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 10:45:33.208423 2005062 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 10:45:33.208438 2005062 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 10:45:33.208447 2005062 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 10:45:33.208452 2005062 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 10:45:33.208851 2005062 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 10:45:33.218157 2005062 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 10:45:33.218199 2005062 kubeadm.go:602] duration metric: took 23.096537ms to restartPrimaryControlPlane
	I1124 10:45:33.218209 2005062 kubeadm.go:403] duration metric: took 111.79993ms to StartCluster
	I1124 10:45:33.218223 2005062 settings.go:142] acquiring lock: {Name:mk21a1b5cbe666c76dae591663be9b2bdcd1d3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:45:33.218298 2005062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:45:33.219209 2005062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/kubeconfig: {Name:mkb195f88f54f76b9f5cd79098f43771cd68ef59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 10:45:33.219445 2005062 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 10:45:33.219795 2005062 config.go:182] Loaded profile config "pause-245240": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:45:33.219840 2005062 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 10:45:33.224429 2005062 out.go:179] * Verifying Kubernetes components...
	I1124 10:45:33.226393 2005062 out.go:179] * Enabled addons: 
	I1124 10:45:33.228302 2005062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 10:45:33.230498 2005062 addons.go:530] duration metric: took 10.653148ms for enable addons: enabled=[]
	I1124 10:45:33.444503 2005062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 10:45:33.457854 2005062 node_ready.go:35] waiting up to 6m0s for node "pause-245240" to be "Ready" ...
	I1124 10:45:31.903608 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:31.903649 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:31.921908 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:31.921948 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:31.999244 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:31.999279 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:31.999293 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:34.558990 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:34.570638 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:34.570725 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:34.608415 1986432 cri.go:89] found id: ""
	I1124 10:45:34.608444 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.608454 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:34.608460 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:34.608522 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:34.655752 1986432 cri.go:89] found id: ""
	I1124 10:45:34.655780 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.655789 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:34.655796 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:34.655866 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:34.707801 1986432 cri.go:89] found id: ""
	I1124 10:45:34.707829 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.707838 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:34.707845 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:34.707903 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:34.755972 1986432 cri.go:89] found id: ""
	I1124 10:45:34.755998 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.756009 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:34.756027 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:34.756101 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:34.803767 1986432 cri.go:89] found id: ""
	I1124 10:45:34.803796 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.803804 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:34.803812 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:34.803881 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:34.851006 1986432 cri.go:89] found id: ""
	I1124 10:45:34.851034 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.851043 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:34.851052 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:34.851115 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:34.904329 1986432 cri.go:89] found id: ""
	I1124 10:45:34.904358 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.904367 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:34.904374 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:34.904435 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:34.951467 1986432 cri.go:89] found id: ""
	I1124 10:45:34.951494 1986432 logs.go:282] 0 containers: []
	W1124 10:45:34.951503 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:34.951512 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:34.951524 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:35.042914 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:35.042954 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:35.063633 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:35.063669 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:35.167271 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:35.167295 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:35.167310 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:35.253298 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:35.253341 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:37.201304 2005062 node_ready.go:49] node "pause-245240" is "Ready"
	I1124 10:45:37.201337 2005062 node_ready.go:38] duration metric: took 3.743449649s for node "pause-245240" to be "Ready" ...
	I1124 10:45:37.201351 2005062 api_server.go:52] waiting for apiserver process to appear ...
	I1124 10:45:37.201411 2005062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:37.219067 2005062 api_server.go:72] duration metric: took 3.999579094s to wait for apiserver process to appear ...
	I1124 10:45:37.219096 2005062 api_server.go:88] waiting for apiserver healthz status ...
	I1124 10:45:37.219115 2005062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 10:45:37.300221 2005062 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 10:45:37.300264 2005062 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 10:45:37.719814 2005062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 10:45:37.728940 2005062 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 10:45:37.729023 2005062 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 10:45:38.219258 2005062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 10:45:38.239206 2005062 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 10:45:38.239231 2005062 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 10:45:38.719919 2005062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 10:45:38.728033 2005062 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 10:45:38.729134 2005062 api_server.go:141] control plane version: v1.34.2
	I1124 10:45:38.729157 2005062 api_server.go:131] duration metric: took 1.510053521s to wait for apiserver health ...
	I1124 10:45:38.729166 2005062 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 10:45:38.732624 2005062 system_pods.go:59] 7 kube-system pods found
	I1124 10:45:38.732660 2005062 system_pods.go:61] "coredns-66bc5c9577-xbq8z" [d9af75b1-2d5c-4114-b82d-eaaa86add98e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 10:45:38.732670 2005062 system_pods.go:61] "etcd-pause-245240" [6b4970fd-dccd-4f98-b975-c2a582df094e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 10:45:38.732675 2005062 system_pods.go:61] "kindnet-sq8vx" [396e6ff1-b0f2-4848-8adb-5c3752c2eb23] Running
	I1124 10:45:38.732681 2005062 system_pods.go:61] "kube-apiserver-pause-245240" [3452c20d-ea2a-45ca-97aa-12d7bd034ffb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 10:45:38.732688 2005062 system_pods.go:61] "kube-controller-manager-pause-245240" [5c5d8109-0a79-46d8-b72f-008d50494bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 10:45:38.732698 2005062 system_pods.go:61] "kube-proxy-vsqz2" [1c11f67f-7449-4aac-83be-3dd80c495669] Running
	I1124 10:45:38.732704 2005062 system_pods.go:61] "kube-scheduler-pause-245240" [fe8b31cd-6a39-48de-9a3b-640d1d84c753] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 10:45:38.732711 2005062 system_pods.go:74] duration metric: took 3.540095ms to wait for pod list to return data ...
	I1124 10:45:38.732719 2005062 default_sa.go:34] waiting for default service account to be created ...
	I1124 10:45:38.735467 2005062 default_sa.go:45] found service account: "default"
	I1124 10:45:38.735495 2005062 default_sa.go:55] duration metric: took 2.760628ms for default service account to be created ...
	I1124 10:45:38.735507 2005062 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 10:45:38.738438 2005062 system_pods.go:86] 7 kube-system pods found
	I1124 10:45:38.738475 2005062 system_pods.go:89] "coredns-66bc5c9577-xbq8z" [d9af75b1-2d5c-4114-b82d-eaaa86add98e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 10:45:38.738487 2005062 system_pods.go:89] "etcd-pause-245240" [6b4970fd-dccd-4f98-b975-c2a582df094e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 10:45:38.738493 2005062 system_pods.go:89] "kindnet-sq8vx" [396e6ff1-b0f2-4848-8adb-5c3752c2eb23] Running
	I1124 10:45:38.738499 2005062 system_pods.go:89] "kube-apiserver-pause-245240" [3452c20d-ea2a-45ca-97aa-12d7bd034ffb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 10:45:38.738506 2005062 system_pods.go:89] "kube-controller-manager-pause-245240" [5c5d8109-0a79-46d8-b72f-008d50494bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 10:45:38.738515 2005062 system_pods.go:89] "kube-proxy-vsqz2" [1c11f67f-7449-4aac-83be-3dd80c495669] Running
	I1124 10:45:38.738522 2005062 system_pods.go:89] "kube-scheduler-pause-245240" [fe8b31cd-6a39-48de-9a3b-640d1d84c753] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 10:45:38.738533 2005062 system_pods.go:126] duration metric: took 3.020464ms to wait for k8s-apps to be running ...
	I1124 10:45:38.738541 2005062 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 10:45:38.738602 2005062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:45:38.751581 2005062 system_svc.go:56] duration metric: took 13.028111ms WaitForService to wait for kubelet
	I1124 10:45:38.751653 2005062 kubeadm.go:587] duration metric: took 5.532168928s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 10:45:38.751678 2005062 node_conditions.go:102] verifying NodePressure condition ...
	I1124 10:45:38.754552 2005062 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 10:45:38.754588 2005062 node_conditions.go:123] node cpu capacity is 2
	I1124 10:45:38.754603 2005062 node_conditions.go:105] duration metric: took 2.918727ms to run NodePressure ...
	I1124 10:45:38.754616 2005062 start.go:242] waiting for startup goroutines ...
	I1124 10:45:38.754623 2005062 start.go:247] waiting for cluster config update ...
	I1124 10:45:38.754631 2005062 start.go:256] writing updated cluster config ...
	I1124 10:45:38.754939 2005062 ssh_runner.go:195] Run: rm -f paused
	I1124 10:45:38.758429 2005062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 10:45:38.759041 2005062 kapi.go:59] client config for pause-245240: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.crt", KeyFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/pause-245240/client.key", CAFile:"/home/jenkins/minikube-integration/21978-1804834/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 10:45:38.761895 2005062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xbq8z" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 10:45:40.767344 2005062 pod_ready.go:104] pod "coredns-66bc5c9577-xbq8z" is not "Ready", error: <nil>
	I1124 10:45:37.835094 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:37.848220 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:37.848315 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:37.888724 1986432 cri.go:89] found id: ""
	I1124 10:45:37.888756 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.888766 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:37.888773 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:37.888836 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:37.924370 1986432 cri.go:89] found id: ""
	I1124 10:45:37.924396 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.924405 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:37.924412 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:37.924477 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:37.974590 1986432 cri.go:89] found id: ""
	I1124 10:45:37.974626 1986432 logs.go:282] 0 containers: []
	W1124 10:45:37.974636 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:37.974643 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:37.974723 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:38.015170 1986432 cri.go:89] found id: ""
	I1124 10:45:38.015209 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.015220 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:38.015236 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:38.015330 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:38.074440 1986432 cri.go:89] found id: ""
	I1124 10:45:38.074499 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.074512 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:38.074533 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:38.074618 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:38.130684 1986432 cri.go:89] found id: ""
	I1124 10:45:38.130720 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.130731 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:38.130738 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:38.130812 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:38.170942 1986432 cri.go:89] found id: ""
	I1124 10:45:38.170983 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.170994 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:38.171001 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:38.171073 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:38.213467 1986432 cri.go:89] found id: ""
	I1124 10:45:38.213508 1986432 logs.go:282] 0 containers: []
	W1124 10:45:38.213518 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:38.213533 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:38.213563 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:38.363953 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:38.363977 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:38.363989 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:38.405041 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:38.405077 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:38.435192 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:38.435222 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:38.509425 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:38.509466 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:41.028226 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:41.038492 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:41.038559 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:41.066358 1986432 cri.go:89] found id: ""
	I1124 10:45:41.066381 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.066390 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:41.066397 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:41.066455 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:41.095869 1986432 cri.go:89] found id: ""
	I1124 10:45:41.095892 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.095901 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:41.095908 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:41.095965 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:41.124298 1986432 cri.go:89] found id: ""
	I1124 10:45:41.124321 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.124330 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:41.124336 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:41.124394 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:41.150773 1986432 cri.go:89] found id: ""
	I1124 10:45:41.150799 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.150807 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:41.150815 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:41.150876 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:41.177034 1986432 cri.go:89] found id: ""
	I1124 10:45:41.177057 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.177066 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:41.177072 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:41.177190 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:41.214595 1986432 cri.go:89] found id: ""
	I1124 10:45:41.214626 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.214635 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:41.214642 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:41.214700 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:41.246230 1986432 cri.go:89] found id: ""
	I1124 10:45:41.246255 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.246264 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:41.246271 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:41.246338 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:41.283456 1986432 cri.go:89] found id: ""
	I1124 10:45:41.283481 1986432 logs.go:282] 0 containers: []
	W1124 10:45:41.283490 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:41.283499 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:41.283511 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:41.358438 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:41.358475 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:41.379384 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:41.379415 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:41.445291 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:41.445364 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:41.445407 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:41.488247 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:41.488291 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1124 10:45:42.767505 2005062 pod_ready.go:104] pod "coredns-66bc5c9577-xbq8z" is not "Ready", error: <nil>
	I1124 10:45:44.270409 2005062 pod_ready.go:94] pod "coredns-66bc5c9577-xbq8z" is "Ready"
	I1124 10:45:44.270433 2005062 pod_ready.go:86] duration metric: took 5.508512947s for pod "coredns-66bc5c9577-xbq8z" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:44.273052 2005062 pod_ready.go:83] waiting for pod "etcd-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:46.278546 2005062 pod_ready.go:94] pod "etcd-pause-245240" is "Ready"
	I1124 10:45:46.278574 2005062 pod_ready.go:86] duration metric: took 2.00550358s for pod "etcd-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:46.280767 2005062 pod_ready.go:83] waiting for pod "kube-apiserver-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:44.023834 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:44.034548 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:44.034620 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:44.064681 1986432 cri.go:89] found id: ""
	I1124 10:45:44.064705 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.064714 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:44.064721 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:44.064781 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:44.100184 1986432 cri.go:89] found id: ""
	I1124 10:45:44.100207 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.100217 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:44.100224 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:44.100281 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:44.142287 1986432 cri.go:89] found id: ""
	I1124 10:45:44.142314 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.142327 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:44.142334 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:44.142393 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:44.181305 1986432 cri.go:89] found id: ""
	I1124 10:45:44.181333 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.181342 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:44.181349 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:44.181430 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:44.218457 1986432 cri.go:89] found id: ""
	I1124 10:45:44.218483 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.218502 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:44.218509 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:44.218581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:44.251490 1986432 cri.go:89] found id: ""
	I1124 10:45:44.251517 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.251526 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:44.251532 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:44.251596 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:44.295854 1986432 cri.go:89] found id: ""
	I1124 10:45:44.295881 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.295890 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:44.295897 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:44.295962 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:44.326463 1986432 cri.go:89] found id: ""
	I1124 10:45:44.326484 1986432 logs.go:282] 0 containers: []
	W1124 10:45:44.326492 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:44.326501 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:44.326513 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:44.413633 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:44.413657 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:44.413670 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:44.455548 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:44.455583 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:44.484839 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:44.484875 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:44.559243 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:44.559282 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:47.786145 2005062 pod_ready.go:94] pod "kube-apiserver-pause-245240" is "Ready"
	I1124 10:45:47.786178 2005062 pod_ready.go:86] duration metric: took 1.505384287s for pod "kube-apiserver-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:47.788401 2005062 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:47.793319 2005062 pod_ready.go:94] pod "kube-controller-manager-pause-245240" is "Ready"
	I1124 10:45:47.793351 2005062 pod_ready.go:86] duration metric: took 4.926694ms for pod "kube-controller-manager-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:47.795621 2005062 pod_ready.go:83] waiting for pod "kube-proxy-vsqz2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:47.865683 2005062 pod_ready.go:94] pod "kube-proxy-vsqz2" is "Ready"
	I1124 10:45:47.865711 2005062 pod_ready.go:86] duration metric: took 70.063719ms for pod "kube-proxy-vsqz2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:48.065961 2005062 pod_ready.go:83] waiting for pod "kube-scheduler-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 10:45:50.073440 2005062 pod_ready.go:104] pod "kube-scheduler-pause-245240" is not "Ready", error: <nil>
	I1124 10:45:51.571138 2005062 pod_ready.go:94] pod "kube-scheduler-pause-245240" is "Ready"
	I1124 10:45:51.571170 2005062 pod_ready.go:86] duration metric: took 3.505179392s for pod "kube-scheduler-pause-245240" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 10:45:51.571184 2005062 pod_ready.go:40] duration metric: took 12.812720948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 10:45:51.628882 2005062 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1124 10:45:51.632073 2005062 out.go:179] * Done! kubectl is now configured to use "pause-245240" cluster and "default" namespace by default
	I1124 10:45:47.077797 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:47.088211 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:47.088277 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:47.118728 1986432 cri.go:89] found id: ""
	I1124 10:45:47.118750 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.118760 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:47.118767 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:47.118825 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:47.150483 1986432 cri.go:89] found id: ""
	I1124 10:45:47.150507 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.150516 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:47.150523 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:47.150581 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:47.181735 1986432 cri.go:89] found id: ""
	I1124 10:45:47.181758 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.181767 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:47.181774 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:47.181833 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:47.214333 1986432 cri.go:89] found id: ""
	I1124 10:45:47.214356 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.214365 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:47.214371 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:47.214432 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:47.244171 1986432 cri.go:89] found id: ""
	I1124 10:45:47.244250 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.244273 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:47.244296 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:47.244395 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:47.275473 1986432 cri.go:89] found id: ""
	I1124 10:45:47.275494 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.275503 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:47.275510 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:47.275568 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:47.309070 1986432 cri.go:89] found id: ""
	I1124 10:45:47.309092 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.309181 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:47.309191 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:47.309250 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:47.336153 1986432 cri.go:89] found id: ""
	I1124 10:45:47.336174 1986432 logs.go:282] 0 containers: []
	W1124 10:45:47.336183 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:47.336193 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:47.336204 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:47.406220 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:47.406257 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:47.424789 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:47.424817 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:47.493299 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:47.493323 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:47.493339 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:47.537076 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:47.537117 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:50.067104 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:50.078283 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:50.078363 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:50.109047 1986432 cri.go:89] found id: ""
	I1124 10:45:50.109074 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.109083 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:50.109090 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:50.109175 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:50.137023 1986432 cri.go:89] found id: ""
	I1124 10:45:50.137046 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.137054 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:50.137060 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:50.137146 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:50.168293 1986432 cri.go:89] found id: ""
	I1124 10:45:50.168316 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.168333 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:50.168340 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:50.168402 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:50.196807 1986432 cri.go:89] found id: ""
	I1124 10:45:50.196831 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.196840 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:50.196847 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:50.196918 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:50.235910 1986432 cri.go:89] found id: ""
	I1124 10:45:50.235932 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.235941 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:50.235947 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:50.236012 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:50.272648 1986432 cri.go:89] found id: ""
	I1124 10:45:50.272671 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.272681 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:50.272688 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:50.272750 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:50.306519 1986432 cri.go:89] found id: ""
	I1124 10:45:50.306542 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.306550 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:50.306556 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:50.306621 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:50.337670 1986432 cri.go:89] found id: ""
	I1124 10:45:50.337692 1986432 logs.go:282] 0 containers: []
	W1124 10:45:50.337700 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:50.337710 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:50.337721 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:50.408914 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:50.408955 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:50.427976 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:50.428169 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:50.503359 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:50.503430 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:50.503458 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:50.544309 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:50.544354 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:53.085257 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:53.095705 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:53.095776 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:53.121867 1986432 cri.go:89] found id: ""
	I1124 10:45:53.121891 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.121900 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:53.121912 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:53.121975 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:53.149089 1986432 cri.go:89] found id: ""
	I1124 10:45:53.149151 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.149160 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:53.149167 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:53.149236 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:53.175431 1986432 cri.go:89] found id: ""
	I1124 10:45:53.175454 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.175462 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:53.175470 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:53.175528 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:53.203073 1986432 cri.go:89] found id: ""
	I1124 10:45:53.203100 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.203110 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:53.203117 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:53.203175 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:53.245799 1986432 cri.go:89] found id: ""
	I1124 10:45:53.245825 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.245833 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:53.245840 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:53.245906 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:53.276054 1986432 cri.go:89] found id: ""
	I1124 10:45:53.276075 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.276084 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:53.276090 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:53.276149 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:53.303453 1986432 cri.go:89] found id: ""
	I1124 10:45:53.303484 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.303493 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:53.303500 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:53.303556 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:53.335209 1986432 cri.go:89] found id: ""
	I1124 10:45:53.335231 1986432 logs.go:282] 0 containers: []
	W1124 10:45:53.335239 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:53.335248 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:53.335260 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:53.448690 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:53.448713 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:53.448726 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:53.500771 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:53.500811 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 10:45:53.541966 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:53.541996 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:53.627838 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:53.627878 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:56.145599 1986432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:45:56.157574 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 10:45:56.157643 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 10:45:56.196385 1986432 cri.go:89] found id: ""
	I1124 10:45:56.196408 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.196422 1986432 logs.go:284] No container was found matching "kube-apiserver"
	I1124 10:45:56.196429 1986432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 10:45:56.196489 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 10:45:56.252617 1986432 cri.go:89] found id: ""
	I1124 10:45:56.252640 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.252718 1986432 logs.go:284] No container was found matching "etcd"
	I1124 10:45:56.252730 1986432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 10:45:56.252799 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 10:45:56.298566 1986432 cri.go:89] found id: ""
	I1124 10:45:56.298587 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.298595 1986432 logs.go:284] No container was found matching "coredns"
	I1124 10:45:56.298601 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 10:45:56.298658 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 10:45:56.346392 1986432 cri.go:89] found id: ""
	I1124 10:45:56.346417 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.346426 1986432 logs.go:284] No container was found matching "kube-scheduler"
	I1124 10:45:56.346433 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 10:45:56.346495 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 10:45:56.390111 1986432 cri.go:89] found id: ""
	I1124 10:45:56.390137 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.390145 1986432 logs.go:284] No container was found matching "kube-proxy"
	I1124 10:45:56.390153 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 10:45:56.390218 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 10:45:56.423308 1986432 cri.go:89] found id: ""
	I1124 10:45:56.423336 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.423344 1986432 logs.go:284] No container was found matching "kube-controller-manager"
	I1124 10:45:56.423351 1986432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 10:45:56.423412 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 10:45:56.468963 1986432 cri.go:89] found id: ""
	I1124 10:45:56.468987 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.468995 1986432 logs.go:284] No container was found matching "kindnet"
	I1124 10:45:56.469002 1986432 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 10:45:56.469061 1986432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 10:45:56.510694 1986432 cri.go:89] found id: ""
	I1124 10:45:56.510719 1986432 logs.go:282] 0 containers: []
	W1124 10:45:56.510728 1986432 logs.go:284] No container was found matching "storage-provisioner"
	I1124 10:45:56.510737 1986432 logs.go:123] Gathering logs for kubelet ...
	I1124 10:45:56.510750 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 10:45:56.595485 1986432 logs.go:123] Gathering logs for dmesg ...
	I1124 10:45:56.595558 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 10:45:56.618105 1986432 logs.go:123] Gathering logs for describe nodes ...
	I1124 10:45:56.618131 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 10:45:56.703129 1986432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 10:45:56.703201 1986432 logs.go:123] Gathering logs for CRI-O ...
	I1124 10:45:56.703228 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 10:45:56.754363 1986432 logs.go:123] Gathering logs for container status ...
	I1124 10:45:56.754402 1986432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.308586027Z" level=info msg="Started container" PID=2230 containerID=4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36 description=kube-system/kube-proxy-vsqz2/kube-proxy id=359f4cff-9d1d-46d6-841e-fa39c99b3e98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bef0c472188bf523459a7471c4cf5a4cc3478a1718cfcde68a99865dd35c7bc
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.326034988Z" level=info msg="Created container 97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23: kube-system/kube-controller-manager-pause-245240/kube-controller-manager" id=1ef968d4-a36f-4b93-b5dd-bdd692b95cea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.326385943Z" level=info msg="Created container 993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82: kube-system/kube-apiserver-pause-245240/kube-apiserver" id=34bdcc70-d219-4d3b-8535-2a7f4c1e7d7a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.327155687Z" level=info msg="Starting container: 97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23" id=06ff96bd-9841-4ae8-b0ab-c60d702a153b name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.327350217Z" level=info msg="Starting container: 993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82" id=66871a28-b009-458a-b8cf-b188ba95fe22 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.330754803Z" level=info msg="Created container 7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775: kube-system/kube-scheduler-pause-245240/kube-scheduler" id=e60331d7-f55a-4b1e-904c-45aab2b12367 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.331724501Z" level=info msg="Starting container: 7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775" id=6a05ff30-919f-4052-b199-162a63c03631 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.34181645Z" level=info msg="Started container" PID=2265 containerID=97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23 description=kube-system/kube-controller-manager-pause-245240/kube-controller-manager id=06ff96bd-9841-4ae8-b0ab-c60d702a153b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f94c7acd315ed8935158803928299b01ae5a5a1b35ee5a126926a13082bf326b
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.342582592Z" level=info msg="Started container" PID=2288 containerID=7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775 description=kube-system/kube-scheduler-pause-245240/kube-scheduler id=6a05ff30-919f-4052-b199-162a63c03631 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1f1f2eb95066576d0ce693736adf524f93d5fe260936cd024ad492f9a5627e6
	Nov 24 10:45:32 pause-245240 crio[2063]: time="2025-11-24T10:45:32.346280417Z" level=info msg="Started container" PID=2267 containerID=993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82 description=kube-system/kube-apiserver-pause-245240/kube-apiserver id=66871a28-b009-458a-b8cf-b188ba95fe22 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dab5c05b7de15319aebb39cd62a62202d5905b64000fc5a3ffbfe28f672e0839
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.565334287Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.568912708Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.568947326Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.568969538Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.571868572Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.571904109Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.571925213Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.575047234Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.575083755Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.57510458Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.578432085Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.578467474Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.578490219Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.581687261Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 10:45:42 pause-245240 crio[2063]: time="2025-11-24T10:45:42.581721641Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	7a603655c81fe       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   25 seconds ago       Running             kube-scheduler            1                   f1f1f2eb95066       kube-scheduler-pause-245240            kube-system
	97aee88b9e3dc       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   25 seconds ago       Running             kube-controller-manager   1                   f94c7acd315ed       kube-controller-manager-pause-245240   kube-system
	993bc385c7eab       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   25 seconds ago       Running             kube-apiserver            1                   dab5c05b7de15       kube-apiserver-pause-245240            kube-system
	632f540de17ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago       Running             kindnet-cni               1                   cb7a659b3c4b5       kindnet-sq8vx                          kube-system
	25758a2f97099       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   9ac711ec07cbe       coredns-66bc5c9577-xbq8z               kube-system
	4692783a7119a       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   25 seconds ago       Running             kube-proxy                1                   2bef0c472188b       kube-proxy-vsqz2                       kube-system
	7d04b12f6f11a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   25 seconds ago       Running             etcd                      1                   73042cdc2e4ad       etcd-pause-245240                      kube-system
	c3454e38c0091       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   9ac711ec07cbe       coredns-66bc5c9577-xbq8z               kube-system
	4828bd44aea43       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   cb7a659b3c4b5       kindnet-sq8vx                          kube-system
	e86c30e4e2616       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   2bef0c472188b       kube-proxy-vsqz2                       kube-system
	a19dba52bf31e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   f94c7acd315ed       kube-controller-manager-pause-245240   kube-system
	c5435f90e719a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   73042cdc2e4ad       etcd-pause-245240                      kube-system
	dcfbc31b64e74       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   dab5c05b7de15       kube-apiserver-pause-245240            kube-system
	4b3239d2756fd       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   f1f1f2eb95066       kube-scheduler-pause-245240            kube-system
	
	
	==> coredns [25758a2f97099b15dd6a39598a2978fc9d586fb8cd9399a75480b182727d437d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49277 - 7162 "HINFO IN 5982923945174560406.3285165719839043450. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011612449s
	
	
	==> coredns [c3454e38c0091545c6a26ed711bda1d653d8308c28b941594fa43bc7a9ab2937] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50923 - 50337 "HINFO IN 2255500921777626642.6403281054394144409. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0152116s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-245240
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-245240
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=pause-245240
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T10_44_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 10:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-245240
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 10:45:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 10:45:19 +0000   Mon, 24 Nov 2025 10:44:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 10:45:19 +0000   Mon, 24 Nov 2025 10:44:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 10:45:19 +0000   Mon, 24 Nov 2025 10:44:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 10:45:19 +0000   Mon, 24 Nov 2025 10:45:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-245240
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1ead4f7d-870c-4125-bc37-6f030fad8409
	  Boot ID:                    27a92f9c-55a4-4798-92be-317cdb891088
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xbq8z                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-245240                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-sq8vx                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-245240             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-245240    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-vsqz2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-245240             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Warning  CgroupV1                 94s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  94s (x8 over 94s)  kubelet          Node pause-245240 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    94s (x8 over 94s)  kubelet          Node pause-245240 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     94s (x8 over 94s)  kubelet          Node pause-245240 status is now: NodeHasSufficientPID
	  Normal   Starting                 85s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 85s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-245240 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-245240 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-245240 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-245240 event: Registered Node pause-245240 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-245240 status is now: NodeReady
	  Normal   RegisteredNode           17s                node-controller  Node pause-245240 event: Registered Node pause-245240 in Controller
	
	
	==> dmesg <==
	[ +29.372278] overlayfs: idmapped layers are currently not supported
	[Nov24 10:17] overlayfs: idmapped layers are currently not supported
	[Nov24 10:18] overlayfs: idmapped layers are currently not supported
	[  +3.899881] overlayfs: idmapped layers are currently not supported
	[Nov24 10:19] overlayfs: idmapped layers are currently not supported
	[ +41.367824] overlayfs: idmapped layers are currently not supported
	[Nov24 10:21] overlayfs: idmapped layers are currently not supported
	[Nov24 10:26] overlayfs: idmapped layers are currently not supported
	[ +33.890897] overlayfs: idmapped layers are currently not supported
	[Nov24 10:28] overlayfs: idmapped layers are currently not supported
	[Nov24 10:29] overlayfs: idmapped layers are currently not supported
	[Nov24 10:30] overlayfs: idmapped layers are currently not supported
	[Nov24 10:32] overlayfs: idmapped layers are currently not supported
	[ +26.643756] overlayfs: idmapped layers are currently not supported
	[  +9.285653] overlayfs: idmapped layers are currently not supported
	[Nov24 10:33] overlayfs: idmapped layers are currently not supported
	[ +18.325038] overlayfs: idmapped layers are currently not supported
	[Nov24 10:34] overlayfs: idmapped layers are currently not supported
	[Nov24 10:35] overlayfs: idmapped layers are currently not supported
	[Nov24 10:36] overlayfs: idmapped layers are currently not supported
	[Nov24 10:37] overlayfs: idmapped layers are currently not supported
	[Nov24 10:39] overlayfs: idmapped layers are currently not supported
	[Nov24 10:41] overlayfs: idmapped layers are currently not supported
	[ +25.006505] overlayfs: idmapped layers are currently not supported
	[Nov24 10:44] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7d04b12f6f11a5a508fafd445c9fbafeb2d5fbb41c9206693db9d7b163d59c81] <==
	{"level":"warn","ts":"2025-11-24T10:45:35.455002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.474098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.541521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.542339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.555755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.593943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.614141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.634291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.662585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.687872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.702001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.715545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.746733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.753179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.773718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.790130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.811094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.830376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.841599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.862388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.892306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.913355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.933874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:35.949978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:45:36.083623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33862","server-name":"","error":"EOF"}
	
	
	==> etcd [c5435f90e719aa2779bbe5f3b217b6402f384412c6e48e6340e0d29d24bbe98b] <==
	{"level":"warn","ts":"2025-11-24T10:44:28.255197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.279293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.315701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.331110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.367626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.373851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T10:44:28.467412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51552","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T10:45:23.252255Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T10:45:23.252312Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-245240","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-24T10:45:23.252438Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T10:45:23.396857Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T10:45:23.398371Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T10:45:23.398421Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-24T10:45:23.398457Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-11-24T10:45:23.398469Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T10:45:23.398557Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T10:45:23.398588Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T10:45:23.398599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-24T10:45:23.398562Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T10:45:23.398627Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T10:45:23.398644Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T10:45:23.401989Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-24T10:45:23.402068Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T10:45:23.402095Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T10:45:23.402109Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-245240","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 10:45:58 up  9:28,  0 user,  load average: 3.68, 3.01, 2.30
	Linux pause-245240 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4828bd44aea438cda942b71c2f80e7f2d601bc85ff99b48cf48e14a03bfef35f] <==
	I1124 10:44:39.010741       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 10:44:39.010990       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 10:44:39.011117       1 main.go:148] setting mtu 1500 for CNI 
	I1124 10:44:39.011127       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 10:44:39.011139       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T10:44:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 10:44:39.207333       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 10:44:39.207350       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 10:44:39.207359       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 10:44:39.207621       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 10:45:09.207100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 10:45:09.208318       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 10:45:09.208434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 10:45:09.208568       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1124 10:45:10.607515       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 10:45:10.607557       1 metrics.go:72] Registering metrics
	I1124 10:45:10.607626       1 controller.go:711] "Syncing nftables rules"
	I1124 10:45:19.213186       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 10:45:19.213304       1 main.go:301] handling current node
	
	
	==> kindnet [632f540de17ba0538f526e70308122e739fd97ce7682b24f147c31e556bc48c0] <==
	I1124 10:45:32.360070       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 10:45:32.360435       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 10:45:32.360558       1 main.go:148] setting mtu 1500 for CNI 
	I1124 10:45:32.360571       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 10:45:32.360583       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T10:45:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 10:45:32.563208       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 10:45:32.563225       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 10:45:32.563233       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 10:45:32.563518       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 10:45:37.365198       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 10:45:37.365249       1 metrics.go:72] Registering metrics
	I1124 10:45:37.365336       1 controller.go:711] "Syncing nftables rules"
	I1124 10:45:42.564952       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 10:45:42.565019       1 main.go:301] handling current node
	I1124 10:45:52.562793       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 10:45:52.562848       1 main.go:301] handling current node
	
	
	==> kube-apiserver [993bc385c7eab5f97988e6c19ab44c8c8fab5331f9949f0c815eea2c4b1fff82] <==
	I1124 10:45:37.230373       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 10:45:37.241434       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 10:45:37.241531       1 policy_source.go:240] refreshing policies
	I1124 10:45:37.262622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 10:45:37.269546       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 10:45:37.273768       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 10:45:37.274223       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 10:45:37.314015       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 10:45:37.314079       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 10:45:37.323888       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 10:45:37.324262       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 10:45:37.325062       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 10:45:37.325251       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 10:45:37.325317       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 10:45:37.325640       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 10:45:37.325690       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 10:45:37.325721       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 10:45:37.337706       1 cache.go:39] Caches are synced for autoregister controller
	E1124 10:45:37.354194       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 10:45:37.971842       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 10:45:39.216528       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 10:45:40.590059       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 10:45:40.732678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 10:45:40.930760       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 10:45:40.979995       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [dcfbc31b64e74a5e46f6371b921ce733ba64c3b3efc2c060d173e151a9a78cd6] <==
	W1124 10:45:23.275405       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.275516       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.275574       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278361       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278440       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278487       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278531       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278576       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278617       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278659       1 logging.go:55] [core] [Channel #25 SubChannel #27]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278698       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278738       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278779       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278824       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278864       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278910       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278947       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.278991       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279027       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279071       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279108       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279144       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279388       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.279552       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 10:45:23.280174       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [97aee88b9e3dc396ba7d34dff4c57500f0b7aad912cad0a5ea2bad5a3a73ea23] <==
	I1124 10:45:40.577803       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 10:45:40.577913       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-245240"
	I1124 10:45:40.577979       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 10:45:40.578428       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 10:45:40.578605       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 10:45:40.581235       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 10:45:40.581281       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 10:45:40.585170       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 10:45:40.586636       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 10:45:40.586732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 10:45:40.587928       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 10:45:40.597455       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 10:45:40.600616       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 10:45:40.602797       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 10:45:40.605084       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 10:45:40.608486       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 10:45:40.609719       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 10:45:40.623763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 10:45:40.623854       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 10:45:40.623869       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 10:45:40.624726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 10:45:40.626752       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 10:45:40.637946       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 10:45:40.640293       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 10:45:40.645617       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164] <==
	I1124 10:44:37.376840       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 10:44:37.376942       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 10:44:37.378078       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 10:44:37.384051       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 10:44:37.384459       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 10:44:37.393927       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 10:44:37.394255       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 10:44:37.394340       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 10:44:37.400088       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 10:44:37.400855       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 10:44:37.391262       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 10:44:37.402640       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 10:44:37.402751       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 10:44:37.402806       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 10:44:37.412379       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 10:44:37.385170       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 10:44:37.415457       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 10:44:37.440686       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 10:44:37.454625       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 10:44:37.458029       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-245240" podCIDRs=["10.244.0.0/24"]
	I1124 10:44:37.459626       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 10:44:37.460602       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 10:44:37.460649       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 10:44:37.459661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 10:45:22.330995       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4692783a7119a447e30c4790015ec768306e9d95b6002420dd316dad375eab36] <==
	I1124 10:45:35.900849       1 server_linux.go:53] "Using iptables proxy"
	I1124 10:45:36.978141       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 10:45:37.390115       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 10:45:37.390336       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 10:45:37.390487       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 10:45:37.606255       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 10:45:37.606371       1 server_linux.go:132] "Using iptables Proxier"
	I1124 10:45:37.621203       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 10:45:37.621623       1 server.go:527] "Version info" version="v1.34.2"
	I1124 10:45:37.623332       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 10:45:37.624701       1 config.go:200] "Starting service config controller"
	I1124 10:45:37.624758       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 10:45:37.624803       1 config.go:106] "Starting endpoint slice config controller"
	I1124 10:45:37.624857       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 10:45:37.624897       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 10:45:37.624943       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 10:45:37.628936       1 config.go:309] "Starting node config controller"
	I1124 10:45:37.629021       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 10:45:37.629052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 10:45:37.725781       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 10:45:37.729197       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 10:45:37.729233       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e86c30e4e2616da17c57bf36566f76243193a68aebf68df8f0a2e44a99680d1c] <==
	I1124 10:44:39.010681       1 server_linux.go:53] "Using iptables proxy"
	I1124 10:44:39.107084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 10:44:39.207994       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 10:44:39.208034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 10:44:39.208100       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 10:44:39.275608       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 10:44:39.275737       1 server_linux.go:132] "Using iptables Proxier"
	I1124 10:44:39.279610       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 10:44:39.280078       1 server.go:527] "Version info" version="v1.34.2"
	I1124 10:44:39.281169       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 10:44:39.282499       1 config.go:200] "Starting service config controller"
	I1124 10:44:39.282546       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 10:44:39.282565       1 config.go:106] "Starting endpoint slice config controller"
	I1124 10:44:39.282569       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 10:44:39.282580       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 10:44:39.282583       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 10:44:39.283291       1 config.go:309] "Starting node config controller"
	I1124 10:44:39.283339       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 10:44:39.283368       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 10:44:39.382983       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 10:44:39.383030       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 10:44:39.382982       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0] <==
	I1124 10:44:29.322133       1 serving.go:386] Generated self-signed cert in-memory
	I1124 10:44:31.891727       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 10:44:31.891760       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 10:44:31.896471       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 10:44:31.896518       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 10:44:31.896554       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:44:31.896562       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:44:31.896575       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:44:31.896589       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:44:31.897009       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 10:44:31.909480       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 10:44:31.996698       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 10:44:31.996650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:44:31.996848       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:23.247853       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 10:45:23.247887       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 10:45:23.247906       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 10:45:23.247931       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:23.247960       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:45:23.247978       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1124 10:45:23.248253       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 10:45:23.248293       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7a603655c81fe14ed0a34e6eb42ebbeececb50ef143028a772733d12ae7d7775] <==
	I1124 10:45:36.538992       1 serving.go:386] Generated self-signed cert in-memory
	I1124 10:45:38.072443       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1124 10:45:38.072587       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 10:45:38.083491       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 10:45:38.083668       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 10:45:38.083758       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 10:45:38.083829       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 10:45:38.093350       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:45:38.093674       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 10:45:38.093728       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:38.093770       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:38.184513       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 10:45:38.194272       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 10:45:38.195393       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.061434    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsqz2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c11f67f-7449-4aac-83be-3dd80c495669" pod="kube-system/kube-proxy-vsqz2"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.061629    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-sq8vx\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="396e6ff1-b0f2-4848-8adb-5c3752c2eb23" pod="kube-system/kindnet-sq8vx"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.062263    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xbq8z\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d9af75b1-2d5c-4114-b82d-eaaa86add98e" pod="kube-system/coredns-66bc5c9577-xbq8z"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: I1124 10:45:32.066850    1300 scope.go:117] "RemoveContainer" containerID="4b3239d2756fd8a64b17008debfb20aac0fc5ca98562d5297aa146a70fb595e0"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067331    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9d1e0521098c6a05af3ffd81f3a6f83e" pod="kube-system/kube-scheduler-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067507    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f3c88e2d69300286a68b4bae07303b03" pod="kube-system/etcd-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067667    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5016a6264a8c350870f6cea806c9c026" pod="kube-system/kube-apiserver-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067824    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsqz2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c11f67f-7449-4aac-83be-3dd80c495669" pod="kube-system/kube-proxy-vsqz2"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.067980    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-sq8vx\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="396e6ff1-b0f2-4848-8adb-5c3752c2eb23" pod="kube-system/kindnet-sq8vx"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.068139    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xbq8z\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d9af75b1-2d5c-4114-b82d-eaaa86add98e" pod="kube-system/coredns-66bc5c9577-xbq8z"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: I1124 10:45:32.070347    1300 scope.go:117] "RemoveContainer" containerID="a19dba52bf31e60db17671e1f573be72e57899f6b6be80cdea9232c590672164"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.070785    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsqz2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1c11f67f-7449-4aac-83be-3dd80c495669" pod="kube-system/kube-proxy-vsqz2"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.070958    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-sq8vx\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="396e6ff1-b0f2-4848-8adb-5c3752c2eb23" pod="kube-system/kindnet-sq8vx"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071112    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xbq8z\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d9af75b1-2d5c-4114-b82d-eaaa86add98e" pod="kube-system/coredns-66bc5c9577-xbq8z"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071268    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c3ffec50de57a0792b9e7ed063c8ccc5" pod="kube-system/kube-controller-manager-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071424    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9d1e0521098c6a05af3ffd81f3a6f83e" pod="kube-system/kube-scheduler-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071599    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f3c88e2d69300286a68b4bae07303b03" pod="kube-system/etcd-pause-245240"
	Nov 24 10:45:32 pause-245240 kubelet[1300]: E1124 10:45:32.071761    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-245240\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5016a6264a8c350870f6cea806c9c026" pod="kube-system/kube-apiserver-pause-245240"
	Nov 24 10:45:37 pause-245240 kubelet[1300]: E1124 10:45:37.143418    1300 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-245240\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-245240' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 24 10:45:37 pause-245240 kubelet[1300]: E1124 10:45:37.144772    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-245240\" is forbidden: User \"system:node:pause-245240\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-245240' and this object" podUID="c3ffec50de57a0792b9e7ed063c8ccc5" pod="kube-system/kube-controller-manager-pause-245240"
	Nov 24 10:45:37 pause-245240 kubelet[1300]: E1124 10:45:37.186596    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-245240\" is forbidden: User \"system:node:pause-245240\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-245240' and this object" podUID="9d1e0521098c6a05af3ffd81f3a6f83e" pod="kube-system/kube-scheduler-pause-245240"
	Nov 24 10:45:43 pause-245240 kubelet[1300]: W1124 10:45:43.051144    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 24 10:45:52 pause-245240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 10:45:52 pause-245240 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 10:45:52 pause-245240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-245240 -n pause-245240
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-245240 -n pause-245240: exit status 2 (366.763305ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-245240 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7200.071s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1124 11:11:07.139205 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/default-k8s-diff-port-329370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (25m49s)
		TestStartStop (28m33s)
		TestStartStop/group/newest-cni (10m11s)
		TestStartStop/group/newest-cni/serial (10m11s)
		TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1m34s)
		TestStartStop/group/no-preload (17m55s)
		TestStartStop/group/no-preload/serial (17m55s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (1m25s)

goroutine 5954 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000103340, 0x400075dbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400013a0c0, {0x534c580, 0x2c, 0x2c}, {0x400075dd08?, 0x125774?, 0x5374f20?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40013145a0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40013145a0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 183 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe8a0, {{0x36f3450, 0x4000224080?}, 0x4000ac4c40?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 174
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 882 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe8a0, {{0x36f3450, 0x4000224080?}, 0x4000ae5500?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 881
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 194 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x4001a10890, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001a10880)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701da0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001ac2ba0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000082f50?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b30?, 0x4000082150?}, 0x40000a06a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b30, 0x4000082150}, 0x4000a9df38, {0x369d6a0, 0x40015eaab0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40000a07a8?, {0x369d6a0?, 0x40015eaab0?}, 0xd0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015fc810, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 184
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 2242 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x400067db00, 0x4000107f80)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2241
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 196 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 195
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 184 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001ac2ba0, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 174
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5561 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe8a0, {{0x36f3450, 0x4000224080?}, 0x400155c480?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5538
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 195 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b30, 0x4000082150}, 0x40000a1740, 0x400130af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b30, 0x4000082150}, 0x40?, 0x40000a1740, 0x40000a1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b30?, 0x4000082150?}, 0x161f90?, 0x4001912fc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400067c600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 184
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5269 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400147ad80, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5260
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5754 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0x40002fa780, 0x40022b6310)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5751
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4944 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001538e00, 0x339b718)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4770
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5567 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5566
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 678 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff38fcf200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400040b680?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400040b680)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400040b680)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001950080)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001950080)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004e6300, {0x36d3140, 0x4001950080})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004e6300)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 676
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 4964 [chan receive, 17 minutes]:
testing.(*T).Run(0x4001539500, {0x296e9ac?, 0x0?}, 0x40002b6f00)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001539500)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001539500, 0x4001950d40)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4944
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3587 [chan send, 64 minutes]:
os/exec.(*Cmd).watchCtx(0x40002faf00, 0x40017068c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 3586
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5679 [chan receive, 1 minutes]:
testing.(*T).Run(0x4001c19c00, {0x2999f72?, 0x40000006ee?}, 0x40012f0980)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x4001c19c00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x4001c19c00, 0x40002b6400)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4962
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5016 [chan receive, 26 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001539dc0, 0x400013bbc0)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4716
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5272 [chan receive]:
testing.(*T).Run(0x40015e5180, {0x2999f88?, 0x40000006ee?}, 0x40002b6500)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40015e5180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40015e5180, 0x40002b6f00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4964
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5760 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x400133cd50, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400133cd40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701da0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400160ed20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40003daf50?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b30?, 0x4000082150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b30, 0x4000082150}, 0x4000a9bf38, {0x369d6a0, 0x400198ed50}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3450?, {0x369d6a0?, 0x400198ed50?}, 0x70?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000aab160, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5757
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4716 [chan receive, 26 minutes]:
testing.(*T).Run(0x4001c18e00, {0x296d53a?, 0x1e6b9160eee1?}, 0x400013bbc0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x4001c18e00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x4001c18e00, 0x339b4e8)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2340 [chan send, 95 minutes]:
os/exec.(*Cmd).watchCtx(0x40016bcd80, 0x4001a2b2d0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 790
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 888 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b30, 0x4000082150}, 0x40004f0740, 0x400010ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b30, 0x4000082150}, 0x84?, 0x40004f0740, 0x40004f0788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b30?, 0x4000082150?}, 0x40005f71d0?, 0x40005f7470?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400067d200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 883
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5080 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x4000716c30)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001c18c40)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001c18c40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001c18c40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001c18c40, 0x40012f0b80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5016
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 887 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40006ee190, 0x2a)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006ee180)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701da0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000b0bb60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004f7f18?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b30?, 0x4000082150?}, 0x40004f7ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b30, 0x4000082150}, 0x4000a9cf38, {0x369d6a0, 0x4001985ad0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40004f7fa8?, {0x369d6a0?, 0x4001985ad0?}, 0xf0?, 0x4001911c80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001976d40, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 883
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 889 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 888
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 883 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000b0bb60, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 881
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5154 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x4000716c30)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40015e4e00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40015e4e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40015e4e00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40015e4e00, 0x40002b6180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5016
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5081 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x4000716c30)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001c19500)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001c19500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001c19500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001c19500, 0x40012f0c00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5016
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3563 [chan send, 64 minutes]:
os/exec.(*Cmd).watchCtx(0x400067d200, 0x40016b2770)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 3562
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3667 [chan send, 64 minutes]:
os/exec.(*Cmd).watchCtx(0x400067c600, 0x40022b6e70)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2979
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3165 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe8a0, {{0x36f3450, 0x4000224080?}, 0x4000adcfc0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3164
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5268 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe8a0, {{0x36f3450, 0x4000224080?}, 0x400067c000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5260
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5017 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x4000716c30)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40015e4380)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40015e4380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40015e4380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40015e4380, 0x400040ab80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5016
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2812 [IO wait, 94 minutes]:
internal/poll.runtime_pollWait(0xffff38fcf400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40002b6080?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40002b6080)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40002b6080)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40007f2600)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40007f2600)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000152300, {0x36d3140, 0x40007f2600})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000152300)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 2810
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 5753 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0xffff38cea000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40022bc540?, 0x400043a800?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40022bc540, {0x400043a800, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40007100b8, {0x400043a800?, 0x400165cd68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400170a390, {0x369ba78, 0x4000710148})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc60, 0x400170a390}, {0x369ba78, 0x4000710148}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40007100b8?, {0x369bc60, 0x400170a390})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40007100b8, {0x369bc60, 0x400170a390})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc60, 0x400170a390}, {0x369baf8, 0x40007100b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40015e48c0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5751
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 5794 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5793
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5793 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b30, 0x4000082150}, 0x400134ef40, 0x400134ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b30, 0x4000082150}, 0x30?, 0x400134ef40, 0x400134ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b30?, 0x4000082150?}, 0x726f506465736f70?, 0x694c205d5b3a7374?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400067c180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5757
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5756 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe8a0, {{0x36f3450, 0x4000224080?}, 0x40015e48c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5755
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5751 [syscall, 1 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x4001345a58, 0x4, 0x40016c83f0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x4001345bb8?, 0x1929a0?, 0xffffd861019f?, 0x0?, 0x40012f0a00?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x400133ca00)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x4001345b88?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x40002fa780)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x40002fa780)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x40015e5880, 0x40002fa780)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateEnableAddonWhileActive({0x36e5798, 0x4000276bd0}, 0x40015e5880, {0x40017280f0, 0x11}, {0x29784dc, 0xa}, {0x69243d3d?, 0x4001493f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:203 +0x12c
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40015e5880?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40015e5880, 0x40012f0980)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5679
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5752 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0xffff38fcfc00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40022bc2a0?, 0x40016e050b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40022bc2a0, {0x40016e050b, 0x2f5, 0x2f5})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4000710090, {0x40016e050b?, 0x4001659568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400170a360, {0x369ba78, 0x4000710138})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc60, 0x400170a360}, {0x369ba78, 0x4000710138}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4000710090?, {0x369bc60, 0x400170a360})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4000710090, {0x369bc60, 0x400170a360})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc60, 0x400170a360}, {0x369baf8, 0x4000710090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40015e5880?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5751
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 5566 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b30, 0x4000082150}, 0x40004f2f40, 0x40004f2f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b30, 0x4000082150}, 0x78?, 0x40004f2f40, 0x40004f2f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b30?, 0x4000082150?}, 0x161f90?, 0x4001538fc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40013de480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5562
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5755 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e5798, 0x40003db110}, {0x36d37a0, 0x40017a5200}, 0x1, 0x0, 0x4000acfbe0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e5798?, 0x4000312a10?}, 0x3b9aca00, 0x4000acfe08?, 0x1, 0x4000acfbe0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e5798, 0x4000312a10}, 0x40015e48c0, {0x4001728930, 0x11}, {0x2993f7b, 0x14}, {0x29abe42, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:379 +0x22c
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36e5798, 0x4000312a10}, 0x40015e48c0, {0x4001728930, 0x11}, {0x29784e6?, 0x4ab7b9a00161e84?}, {0x69243d46?, 0x4001498f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:272 +0xf8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40015e48c0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40015e48c0, 0x40002b6500)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5272
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2277 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x4001554600, 0x40022b69a0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2276
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5082 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x4000716c30)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001c196c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001c196c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001c196c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001c196c0, 0x40012f0c80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5016
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4770 [chan receive, 30 minutes]:
testing.(*T).Run(0x4001c19180, {0x296d53a?, 0x4001435f58?}, 0x339b718)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4001c19180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4001c19180, 0x339b530)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5281 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b30, 0x4000082150}, 0x400134ef40, 0x4001432f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b30, 0x4000082150}, 0x30?, 0x400134ef40, 0x400134ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b30?, 0x4000082150?}, 0x726f506465736f70?, 0x694c205d5b3a7374?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400067c180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5269
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5757 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400160ed20, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5755
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5282 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5281
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5562 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400147bf20, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5538
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3169 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b30, 0x4000082150}, 0x40014aa740, 0x4001308f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b30, 0x4000082150}, 0x51?, 0x40014aa740, 0x40014aa788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b30?, 0x4000082150?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400067dc80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5264 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x400133cd10, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400133cd00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701da0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400147ad80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40016b2930?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b30?, 0x4000082150?}, 0x40004f06a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b30, 0x4000082150}, 0x400010ff38, {0x369d6a0, 0x400198e690}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40004f07a8?, {0x369d6a0?, 0x400198e690?}, 0x10?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400190e540, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5269
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5565 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40019515d0, 0x12)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40019515c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701da0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400147bf20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x52e1620?, 0x40002a6d30?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b30?, 0x4000082150?}, 0x40002a6da0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b30, 0x4000082150}, 0x4001444f38, {0x369d6a0, 0x40016f9410}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40002a6d20?, {0x369d6a0?, 0x40016f9410?}, 0x20?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400145d520, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5562
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3170 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3169
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3136 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0x400133d610, 0x20)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400133d600)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701da0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400160ec00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014a7e88?, 0x2a0ac?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b30?, 0x4000082150?}, 0xffff7fbf95c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b30, 0x4000082150}, 0x4001446f38, {0x369d6a0, 0x4000620c00}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3450?, {0x369d6a0?, 0x4000620c00?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400132e710, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3166 [chan receive, 66 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400160ec00, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3164
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5079 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x4000716c30)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001c181c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001c181c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001c181c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001c181c0, 0x40012f0b00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5016
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4962 [chan receive, 10 minutes]:
testing.(*T).Run(0x4001539180, {0x296e9ac?, 0x0?}, 0x40002b6400)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001539180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001539180, 0x4001950cc0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4944
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5155 [chan receive, 26 minutes]:
testing.(*testState).waitParallel(0x4000716c30)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40015e4fc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40015e4fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40015e4fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40015e4fc0, 0x40002b6200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5016
	/usr/local/go/src/testing/testing.go:1997 +0x364

Test pass (222/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.64
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 6.33
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.25
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.47
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0.47
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 169.14
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 9.85
57 TestAddons/StoppedEnableDisable 12.64
58 TestCertOptions 36.36
59 TestCertExpiration 326.1
61 TestForceSystemdFlag 37.32
62 TestForceSystemdEnv 34.43
67 TestErrorSpam/setup 33.63
68 TestErrorSpam/start 0.82
69 TestErrorSpam/status 1.18
70 TestErrorSpam/pause 6.86
71 TestErrorSpam/unpause 5.52
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 77.44
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 29.71
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.6
84 TestFunctional/serial/CacheCmd/cache/add_local 1.13
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
89 TestFunctional/serial/CacheCmd/cache/delete 0.14
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 59.52
93 TestFunctional/serial/ComponentHealth 0.13
94 TestFunctional/serial/LogsCmd 1.54
95 TestFunctional/serial/LogsFileCmd 1.51
96 TestFunctional/serial/InvalidService 3.95
98 TestFunctional/parallel/ConfigCmd 0.45
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.2
102 TestFunctional/parallel/StatusCmd 1.02
107 TestFunctional/parallel/AddonsCmd 0.17
110 TestFunctional/parallel/SSHCmd 0.75
111 TestFunctional/parallel/CpCmd 2.42
113 TestFunctional/parallel/FileSync 0.28
114 TestFunctional/parallel/CertSync 1.7
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
122 TestFunctional/parallel/License 0.28
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.75
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
136 TestFunctional/parallel/ProfileCmd/profile_list 0.45
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
138 TestFunctional/parallel/MountCmd/any-port 7.77
139 TestFunctional/parallel/MountCmd/specific-port 2.15
140 TestFunctional/parallel/MountCmd/VerifyCleanup 2.2
141 TestFunctional/parallel/ServiceCmd/List 1.33
142 TestFunctional/parallel/ServiceCmd/JSONOutput 1.33
146 TestFunctional/parallel/Version/short 0.06
147 TestFunctional/parallel/Version/components 1
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.88
153 TestFunctional/parallel/ImageCommands/Setup 0.62
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.57
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.95
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.9
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.95
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.01
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.55
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.73
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.23
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.4
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.15
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.72
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.46
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.51
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.76
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.67
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.45
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.67
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.27
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 196.95
265 TestMultiControlPlane/serial/DeployApp 7.05
266 TestMultiControlPlane/serial/PingHostFromPods 1.51
267 TestMultiControlPlane/serial/AddWorkerNode 58.99
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
270 TestMultiControlPlane/serial/CopyFile 20.36
271 TestMultiControlPlane/serial/StopSecondaryNode 12.9
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
273 TestMultiControlPlane/serial/RestartSecondaryNode 20.64
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.53
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 122.98
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.95
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
278 TestMultiControlPlane/serial/StopCluster 36.39
279 TestMultiControlPlane/serial/RestartCluster 84.03
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
281 TestMultiControlPlane/serial/AddSecondaryNode 85.45
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
287 TestJSONOutput/start/Command 81.34
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.81
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 39.34
313 TestKicCustomNetwork/use_default_bridge_network 34.24
314 TestKicExistingNetwork 38.86
315 TestKicCustomSubnet 37.89
316 TestKicStaticIP 37.95
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 74.95
321 TestMountStart/serial/StartWithMountFirst 8.73
322 TestMountStart/serial/VerifyMountFirst 0.27
323 TestMountStart/serial/StartWithMountSecond 8.51
324 TestMountStart/serial/VerifyMountSecond 0.29
325 TestMountStart/serial/DeleteFirst 1.7
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.28
328 TestMountStart/serial/RestartStopped 7.94
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 140.03
333 TestMultiNode/serial/DeployApp2Nodes 6.41
334 TestMultiNode/serial/PingHostFrom2Pods 0.93
335 TestMultiNode/serial/AddNode 57.74
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 10.63
339 TestMultiNode/serial/StopNode 2.39
340 TestMultiNode/serial/StartAfterStop 8.22
341 TestMultiNode/serial/RestartKeepsNodes 78.9
342 TestMultiNode/serial/DeleteNode 5.72
343 TestMultiNode/serial/StopMultiNode 24
344 TestMultiNode/serial/RestartMultiNode 59.23
345 TestMultiNode/serial/ValidateNameConflict 37.54
350 TestPreload 133.01
352 TestScheduledStopUnix 108.74
355 TestInsufficientStorage 12.7
356 TestRunningBinaryUpgrade 52.49
359 TestMissingContainerUpgrade 117.99
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
362 TestNoKubernetes/serial/StartWithK8s 41.89
363 TestNoKubernetes/serial/StartWithStopK8s 98.27
364 TestNoKubernetes/serial/Start 9.08
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
367 TestNoKubernetes/serial/ProfileList 1.66
368 TestNoKubernetes/serial/Stop 1.55
369 TestNoKubernetes/serial/StartNoArgs 7.85
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
371 TestStoppedBinaryUpgrade/Setup 0.74
372 TestStoppedBinaryUpgrade/Upgrade 63.1
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
382 TestPause/serial/Start 87.02
383 TestPause/serial/SecondStartNoReconfiguration 29.95

TestDownloadOnly/v1.28.0/json-events (5.64s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-698929 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-698929 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.641787518s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.64s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 09:12:33.104622 1806704 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 09:12:33.105194 1806704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-698929
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-698929: exit status 85 (91.803832ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-698929 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-698929 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:12:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:12:27.509990 1806709 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:12:27.510137 1806709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:27.510183 1806709 out.go:374] Setting ErrFile to fd 2...
	I1124 09:12:27.510196 1806709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:27.510480 1806709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	W1124 09:12:27.510638 1806709 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21978-1804834/.minikube/config/config.json: open /home/jenkins/minikube-integration/21978-1804834/.minikube/config/config.json: no such file or directory
	I1124 09:12:27.511097 1806709 out.go:368] Setting JSON to true
	I1124 09:12:27.511983 1806709 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28498,"bootTime":1763947050,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:12:27.512052 1806709 start.go:143] virtualization:  
	I1124 09:12:27.517741 1806709 out.go:99] [download-only-698929] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1124 09:12:27.517916 1806709 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 09:12:27.517980 1806709 notify.go:221] Checking for updates...
	I1124 09:12:27.521303 1806709 out.go:171] MINIKUBE_LOCATION=21978
	I1124 09:12:27.524799 1806709 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:12:27.528150 1806709 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:12:27.531462 1806709 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:12:27.534712 1806709 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 09:12:27.540992 1806709 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 09:12:27.541336 1806709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:12:27.569585 1806709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:12:27.569697 1806709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:27.622346 1806709 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 09:12:27.612878859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:27.622454 1806709 docker.go:319] overlay module found
	I1124 09:12:27.625585 1806709 out.go:99] Using the docker driver based on user configuration
	I1124 09:12:27.625631 1806709 start.go:309] selected driver: docker
	I1124 09:12:27.625639 1806709 start.go:927] validating driver "docker" against <nil>
	I1124 09:12:27.625768 1806709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:27.687977 1806709 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 09:12:27.678089544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:27.688141 1806709 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:12:27.688434 1806709 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 09:12:27.688587 1806709 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 09:12:27.691833 1806709 out.go:171] Using Docker driver with root privileges
	I1124 09:12:27.694899 1806709 cni.go:84] Creating CNI manager for ""
	I1124 09:12:27.694984 1806709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:12:27.695002 1806709 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:12:27.695101 1806709 start.go:353] cluster config:
	{Name:download-only-698929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-698929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:12:27.698251 1806709 out.go:99] Starting "download-only-698929" primary control-plane node in "download-only-698929" cluster
	I1124 09:12:27.698283 1806709 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:12:27.701340 1806709 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:12:27.701395 1806709 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 09:12:27.701541 1806709 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:12:27.716256 1806709 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 09:12:27.717057 1806709 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 09:12:27.717189 1806709 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 09:12:27.763923 1806709 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 09:12:27.763952 1806709 cache.go:65] Caching tarball of preloaded images
	I1124 09:12:27.764166 1806709 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 09:12:27.767514 1806709 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 09:12:27.767538 1806709 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1124 09:12:27.856592 1806709 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1124 09:12:27.856722 1806709 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 09:12:31.370655 1806709 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1124 09:12:31.371058 1806709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/download-only-698929/config.json ...
	I1124 09:12:31.371097 1806709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/download-only-698929/config.json: {Name:mka5addb1c43079dd02e27d54f83edce057593e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:12:31.371752 1806709 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 09:12:31.372407 1806709 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-698929 host does not exist
	  To start a cluster, run: "minikube start -p download-only-698929"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-698929
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-432573 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-432573 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.330411386s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (6.33s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1124 09:12:39.896173 1806704 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1124 09:12:39.896207 1806704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-432573
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-432573: exit status 85 (90.965118ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-698929 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-698929 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-698929                                                                                                                                                   │ download-only-698929 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-432573 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-432573 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:12:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:12:33.606616 1806908 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:12:33.606748 1806908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:33.606759 1806908 out.go:374] Setting ErrFile to fd 2...
	I1124 09:12:33.606765 1806908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:33.607023 1806908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:12:33.607418 1806908 out.go:368] Setting JSON to true
	I1124 09:12:33.608242 1806908 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28504,"bootTime":1763947050,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:12:33.608312 1806908 start.go:143] virtualization:  
	I1124 09:12:33.611782 1806908 out.go:99] [download-only-432573] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:12:33.612063 1806908 notify.go:221] Checking for updates...
	I1124 09:12:33.615074 1806908 out.go:171] MINIKUBE_LOCATION=21978
	I1124 09:12:33.618221 1806908 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:12:33.621291 1806908 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:12:33.624206 1806908 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:12:33.627215 1806908 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 09:12:33.632923 1806908 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 09:12:33.633251 1806908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:12:33.658085 1806908 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:12:33.658205 1806908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:33.731187 1806908 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-24 09:12:33.721949729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:33.731291 1806908 docker.go:319] overlay module found
	I1124 09:12:33.734289 1806908 out.go:99] Using the docker driver based on user configuration
	I1124 09:12:33.734316 1806908 start.go:309] selected driver: docker
	I1124 09:12:33.734325 1806908 start.go:927] validating driver "docker" against <nil>
	I1124 09:12:33.734421 1806908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:33.787643 1806908 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-24 09:12:33.77824517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:33.787816 1806908 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:12:33.788111 1806908 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 09:12:33.788268 1806908 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 09:12:33.791442 1806908 out.go:171] Using Docker driver with root privileges
	I1124 09:12:33.794270 1806908 cni.go:84] Creating CNI manager for ""
	I1124 09:12:33.794336 1806908 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 09:12:33.794353 1806908 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:12:33.794430 1806908 start.go:353] cluster config:
	{Name:download-only-432573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-432573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:12:33.797357 1806908 out.go:99] Starting "download-only-432573" primary control-plane node in "download-only-432573" cluster
	I1124 09:12:33.797375 1806908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 09:12:33.800247 1806908 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:12:33.800288 1806908 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:12:33.800464 1806908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:12:33.816076 1806908 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 09:12:33.816199 1806908 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 09:12:33.816224 1806908 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 09:12:33.816229 1806908 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 09:12:33.816239 1806908 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 09:12:33.853635 1806908 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1124 09:12:33.853660 1806908 cache.go:65] Caching tarball of preloaded images
	I1124 09:12:33.854456 1806908 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1124 09:12:33.857642 1806908 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1124 09:12:33.857674 1806908 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1124 09:12:33.942899 1806908 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1124 09:12:33.942959 1806908 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-432573 host does not exist
	  To start a cluster, run: "minikube start -p download-only-432573"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-432573
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-533969 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-533969 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.465866022s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.47s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
I1124 09:12:43.998508 1806704 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
I1124 09:12:44.155403 1806704 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
I1124 09:12:44.314546 1806704 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.47s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-533969
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-533969: exit status 85 (88.325451ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-698929 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-698929 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-698929                                                                                                                                                          │ download-only-698929 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-432573 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-432573 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ delete  │ -p download-only-432573                                                                                                                                                          │ download-only-432573 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │ 24 Nov 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-533969 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-533969 │ jenkins │ v1.37.0 │ 24 Nov 25 09:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:12:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:12:40.431068 1807110 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:12:40.431266 1807110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:40.431304 1807110 out.go:374] Setting ErrFile to fd 2...
	I1124 09:12:40.431325 1807110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:12:40.431680 1807110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:12:40.432176 1807110 out.go:368] Setting JSON to true
	I1124 09:12:40.433162 1807110 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28511,"bootTime":1763947050,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:12:40.433268 1807110 start.go:143] virtualization:  
	I1124 09:12:40.436786 1807110 out.go:99] [download-only-533969] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:12:40.437084 1807110 notify.go:221] Checking for updates...
	I1124 09:12:40.440081 1807110 out.go:171] MINIKUBE_LOCATION=21978
	I1124 09:12:40.443194 1807110 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:12:40.446148 1807110 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:12:40.449170 1807110 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:12:40.452151 1807110 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 09:12:40.457954 1807110 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 09:12:40.458260 1807110 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:12:40.481090 1807110 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:12:40.481209 1807110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:40.535594 1807110 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 09:12:40.52643333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:40.535716 1807110 docker.go:319] overlay module found
	I1124 09:12:40.538809 1807110 out.go:99] Using the docker driver based on user configuration
	I1124 09:12:40.538867 1807110 start.go:309] selected driver: docker
	I1124 09:12:40.538875 1807110 start.go:927] validating driver "docker" against <nil>
	I1124 09:12:40.538984 1807110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:12:40.600357 1807110 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 09:12:40.590893645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:12:40.600519 1807110 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:12:40.600835 1807110 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 09:12:40.601003 1807110 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 09:12:40.604194 1807110 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-533969 host does not exist
	  To start a cluster, run: "minikube start -p download-only-533969"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-533969
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1124 09:12:45.921365 1806704 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-891208 --alsologtostderr --binary-mirror http://127.0.0.1:41177 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-891208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-891208
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-048116
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-048116: exit status 85 (75.507271ms)

-- stdout --
	* Profile "addons-048116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-048116"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-048116
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-048116: exit status 85 (80.184289ms)

-- stdout --
	* Profile "addons-048116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-048116"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (169.14s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-048116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-048116 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.138343906s)
--- PASS: TestAddons/Setup (169.14s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-048116 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-048116 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-048116 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-048116 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [46c2bdf3-43ee-4778-959c-9523d8d1f256] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [46c2bdf3-43ee-4778-959c-9523d8d1f256] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004580477s
addons_test.go:694: (dbg) Run:  kubectl --context addons-048116 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-048116 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-048116 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-048116 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

TestAddons/StoppedEnableDisable (12.64s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-048116
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-048116: (12.34766896s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-048116
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-048116
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-048116
--- PASS: TestAddons/StoppedEnableDisable (12.64s)

TestCertOptions (36.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-041668 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1124 10:52:54.299909 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-041668 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.527990903s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-041668 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-041668 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-041668 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-041668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-041668
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-041668: (2.093356461s)
--- PASS: TestCertOptions (36.36s)

TestCertExpiration (326.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-352809 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1124 10:47:54.299831 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-352809 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (34.620890768s)
E1124 10:49:17.373531 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:50:36.851039 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:50:53.142665 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-352809 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-352809 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m48.984153144s)
helpers_test.go:175: Cleaning up "cert-expiration-352809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-352809
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-352809: (2.48952438s)
--- PASS: TestCertExpiration (326.10s)

TestForceSystemdFlag (37.32s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-170811 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-170811 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.371751517s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-170811 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-170811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-170811
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-170811: (2.61122545s)
--- PASS: TestForceSystemdFlag (37.32s)

TestForceSystemdEnv (34.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-478222 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-478222 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.877730301s)
helpers_test.go:175: Cleaning up "force-systemd-env-478222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-478222
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-478222: (2.548883861s)
--- PASS: TestForceSystemdEnv (34.43s)

TestErrorSpam/setup (33.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-722551 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-722551 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-722551 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-722551 --driver=docker  --container-runtime=crio: (33.628385146s)
--- PASS: TestErrorSpam/setup (33.63s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 status
--- PASS: TestErrorSpam/status (1.18s)

TestErrorSpam/pause (6.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause: exit status 80 (2.516685758s)

-- stdout --
	* Pausing node nospam-722551 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:19:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause: exit status 80 (1.998283751s)

-- stdout --
	* Pausing node nospam-722551 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause: exit status 80 (2.337383174s)

-- stdout --
	* Pausing node nospam-722551 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:19:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.86s)

TestErrorSpam/unpause (5.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause: exit status 80 (1.733286204s)

-- stdout --
	* Unpausing node nospam-722551 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:19:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause: exit status 80 (1.714928548s)

-- stdout --
	* Unpausing node nospam-722551 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:19:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause: exit status 80 (2.071793977s)

-- stdout --
	* Unpausing node nospam-722551 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.52s)

TestErrorSpam/stop (1.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 stop: (1.319319785s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-722551 --log_dir /tmp/nospam-722551 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (77.44s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-498341 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1124 09:20:36.855170 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:36.864858 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:36.876420 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:36.897809 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:36.939299 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:37.020909 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:37.182531 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:37.504190 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:38.146388 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:39.427692 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:41.989129 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:47.111171 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:20:57.352629 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-498341 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.435381874s)
--- PASS: TestFunctional/serial/StartWithProxy (77.44s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.71s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1124 09:21:08.348536 1806704 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-498341 --alsologtostderr -v=8
E1124 09:21:17.833977 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-498341 --alsologtostderr -v=8: (29.702508703s)
functional_test.go:678: soft start took 29.708440661s for "functional-498341" cluster.
I1124 09:21:38.051379 1806704 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (29.71s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-498341 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 cache add registry.k8s.io/pause:3.1: (1.23660418s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 cache add registry.k8s.io/pause:3.3: (1.225268912s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 cache add registry.k8s.io/pause:latest: (1.140569663s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-498341 /tmp/TestFunctionalserialCacheCmdcacheadd_local4184184916/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cache add minikube-local-cache-test:functional-498341
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cache delete minikube-local-cache-test:functional-498341
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-498341
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.639145ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 kubectl -- --context functional-498341 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-498341 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (59.52s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-498341 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 09:21:58.795757 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-498341 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.521527429s)
functional_test.go:776: restart took 59.521978783s for "functional-498341" cluster.
I1124 09:22:45.226228 1806704 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (59.52s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-498341 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.13s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 logs: (1.537367633s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 logs --file /tmp/TestFunctionalserialLogsFileCmd3920035545/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 logs --file /tmp/TestFunctionalserialLogsFileCmd3920035545/001/logs.txt: (1.506826561s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (3.95s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-498341 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-498341
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-498341: exit status 115 (402.034652ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31672 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-498341 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 config get cpus: exit status 14 (58.451283ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 config get cpus: exit status 14 (63.374047ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-498341 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-498341 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (200.108259ms)

                                                
                                                
-- stdout --
	* [functional-498341] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:33:21.469824 1833472 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:33:21.469974 1833472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:33:21.470001 1833472 out.go:374] Setting ErrFile to fd 2...
	I1124 09:33:21.470020 1833472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:33:21.470303 1833472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:33:21.470768 1833472 out.go:368] Setting JSON to false
	I1124 09:33:21.471687 1833472 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29752,"bootTime":1763947050,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:33:21.471757 1833472 start.go:143] virtualization:  
	I1124 09:33:21.475095 1833472 out.go:179] * [functional-498341] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 09:33:21.478006 1833472 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:33:21.478091 1833472 notify.go:221] Checking for updates...
	I1124 09:33:21.483604 1833472 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:33:21.486485 1833472 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:33:21.489401 1833472 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:33:21.492427 1833472 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:33:21.495268 1833472 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:33:21.498666 1833472 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:33:21.499304 1833472 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:33:21.534539 1833472 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:33:21.534673 1833472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:33:21.601255 1833472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:33:21.590530539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:33:21.601371 1833472 docker.go:319] overlay module found
	I1124 09:33:21.604573 1833472 out.go:179] * Using the docker driver based on existing profile
	I1124 09:33:21.607412 1833472 start.go:309] selected driver: docker
	I1124 09:33:21.607433 1833472 start.go:927] validating driver "docker" against &{Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:33:21.607542 1833472 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:33:21.611081 1833472 out.go:203] 
	W1124 09:33:21.613920 1833472 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 09:33:21.616780 1833472 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-498341 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-498341 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-498341 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (202.321563ms)

-- stdout --
	* [functional-498341] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1124 09:33:21.274049 1833426 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:33:21.274195 1833426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:33:21.274207 1833426 out.go:374] Setting ErrFile to fd 2...
	I1124 09:33:21.274214 1833426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:33:21.274594 1833426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 09:33:21.274988 1833426 out.go:368] Setting JSON to false
	I1124 09:33:21.275929 1833426 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29752,"bootTime":1763947050,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 09:33:21.276002 1833426 start.go:143] virtualization:  
	I1124 09:33:21.279941 1833426 out.go:179] * [functional-498341] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1124 09:33:21.283133 1833426 notify.go:221] Checking for updates...
	I1124 09:33:21.287292 1833426 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:33:21.290257 1833426 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:33:21.293137 1833426 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 09:33:21.296141 1833426 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 09:33:21.298976 1833426 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 09:33:21.301780 1833426 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:33:21.305487 1833426 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 09:33:21.306090 1833426 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:33:21.335830 1833426 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 09:33:21.335945 1833426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:33:21.400143 1833426 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 09:33:21.390903139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 09:33:21.400247 1833426 docker.go:319] overlay module found
	I1124 09:33:21.403280 1833426 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 09:33:21.406164 1833426 start.go:309] selected driver: docker
	I1124 09:33:21.406187 1833426 start.go:927] validating driver "docker" against &{Name:functional-498341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-498341 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:33:21.406325 1833426 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:33:21.409852 1833426 out.go:203] 
	W1124 09:33:21.412763 1833426 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 09:33:21.415716 1833426 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh -n functional-498341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cp functional-498341:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2522289623/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh -n functional-498341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh -n functional-498341 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.42s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1806704/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo cat /etc/test/nested/copy/1806704/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1806704.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo cat /etc/ssl/certs/1806704.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1806704.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo cat /usr/share/ca-certificates/1806704.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/18067042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo cat /etc/ssl/certs/18067042.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/18067042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo cat /usr/share/ca-certificates/18067042.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-498341 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 ssh "sudo systemctl is-active docker": exit status 1 (288.929143ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 ssh "sudo systemctl is-active containerd": exit status 1 (283.056346ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-498341 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-498341 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-498341 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1829646: os: process already finished
helpers_test.go:525: unable to kill pid 1829460: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-498341 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-498341 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-498341 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c69dbe10-bb39-41c1-b590-6c194c6c57f4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c69dbe10-bb39-41c1-b590-6c194c6c57f4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004122339s
I1124 09:23:02.741731 1806704 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-498341 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.117.210 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-498341 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "375.375829ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "74.117433ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "364.412028ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "57.119265ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (7.77s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdany-port3680630357/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763976788079925110" to /tmp/TestFunctionalparallelMountCmdany-port3680630357/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763976788079925110" to /tmp/TestFunctionalparallelMountCmdany-port3680630357/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763976788079925110" to /tmp/TestFunctionalparallelMountCmdany-port3680630357/001/test-1763976788079925110
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.627694ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1124 09:33:08.429562 1806704 retry.go:31] will retry after 363.986726ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 09:33 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 09:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 09:33 test-1763976788079925110
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh cat /mount-9p/test-1763976788079925110
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-498341 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [76112433-5459-4de8-9316-b1d1756eaff3] Pending
helpers_test.go:352: "busybox-mount" [76112433-5459-4de8-9316-b1d1756eaff3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [76112433-5459-4de8-9316-b1d1756eaff3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [76112433-5459-4de8-9316-b1d1756eaff3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003877661s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-498341 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdany-port3680630357/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.77s)

TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdspecific-port1417379423/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.214822ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1124 09:33:16.213396 1806704 retry.go:31] will retry after 722.445513ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdspecific-port1417379423/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 ssh "sudo umount -f /mount-9p": exit status 1 (299.106813ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-498341 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdspecific-port1417379423/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.2s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457111760/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457111760/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457111760/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T" /mount1: exit status 1 (588.089637ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 09:33:18.588398 1806704 retry.go:31] will retry after 684.755125ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-498341 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457111760/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457111760/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-498341 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1457111760/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.20s)

TestFunctional/parallel/ServiceCmd/List (1.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 service list: (1.325736957s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 service list -o json: (1.327667747s)
functional_test.go:1504: Took "1.327745386s" to run "out/minikube-linux-arm64 -p functional-498341 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.33s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 version -o=json --components: (1.001909431s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-498341 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-498341 image ls --format short --alsologtostderr:
I1124 09:37:34.413278 1836009 out.go:360] Setting OutFile to fd 1 ...
I1124 09:37:34.413477 1836009 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:34.413503 1836009 out.go:374] Setting ErrFile to fd 2...
I1124 09:37:34.413528 1836009 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:34.413814 1836009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 09:37:34.414474 1836009 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:34.414596 1836009 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:34.415142 1836009 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
I1124 09:37:34.432596 1836009 ssh_runner.go:195] Run: systemctl --version
I1124 09:37:34.432657 1836009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
I1124 09:37:34.450458 1836009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
I1124 09:37:34.555662 1836009 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-498341 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ localhost/my-image                      │ functional-498341  │ 9a4a13228e9e5 │ 1.64MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-498341 image ls --format table --alsologtostderr:
I1124 09:37:38.975357 1836479 out.go:360] Setting OutFile to fd 1 ...
I1124 09:37:38.975545 1836479 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:38.975557 1836479 out.go:374] Setting ErrFile to fd 2...
I1124 09:37:38.975563 1836479 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:38.975844 1836479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 09:37:38.976505 1836479 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:38.976663 1836479 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:38.977333 1836479 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
I1124 09:37:38.994379 1836479 ssh_runner.go:195] Run: systemctl --version
I1124 09:37:38.994440 1836479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
I1124 09:37:39.012014 1836479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
I1124 09:37:39.119667 1836479 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-498341 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf79
7763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags
":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s
.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"8c4ee6bcd4ff19b4adf00e99bf483a921004be0ef5fab8c14ea54edb99f2ada3","repoDigests":["docker.io/library/9dcbd43a2bf2513f8f308dec80f38b89a0fe34098a2dd8eb29eb78656bc889ac-tmp@sha256:162c93a91fb551fae2e3159fb395205edb6ac5f73a31ea9c3c3ef06b25f2feb8"],"repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"9a4a13228e9e5469ddcd1907645842a390fd1c909c70aed8114132367e551313","repoDigests":["localhost/my-image@sha256:15df5a3
4351d3aa80eb37ef765c409e35c57796fe23b61510cf2dae926410432"],"repoTags":["localhost/my-image:functional-498341"],"size":"1640791"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25
a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-498341 image ls --format json --alsologtostderr:
I1124 09:37:38.750011 1836441 out.go:360] Setting OutFile to fd 1 ...
I1124 09:37:38.750159 1836441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:38.750182 1836441 out.go:374] Setting ErrFile to fd 2...
I1124 09:37:38.750199 1836441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:38.750484 1836441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 09:37:38.751106 1836441 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:38.751265 1836441 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:38.751813 1836441 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
I1124 09:37:38.769305 1836441 ssh_runner.go:195] Run: systemctl --version
I1124 09:37:38.769368 1836441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
I1124 09:37:38.787025 1836441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
I1124 09:37:38.891784 1836441 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-498341 image ls --format yaml --alsologtostderr:
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-498341 image ls --format yaml --alsologtostderr:
I1124 09:37:34.643666 1836046 out.go:360] Setting OutFile to fd 1 ...
I1124 09:37:34.643848 1836046 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:34.643861 1836046 out.go:374] Setting ErrFile to fd 2...
I1124 09:37:34.643868 1836046 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:34.644189 1836046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 09:37:34.644880 1836046 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:34.645011 1836046 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:34.645569 1836046 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
I1124 09:37:34.664776 1836046 ssh_runner.go:195] Run: systemctl --version
I1124 09:37:34.664842 1836046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
I1124 09:37:34.683094 1836046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
I1124 09:37:34.788050 1836046 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-498341 ssh pgrep buildkitd: exit status 1 (293.474464ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr: (3.338350748s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8c4ee6bcd4f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-498341
--> 9a4a13228e9
Successfully tagged localhost/my-image:functional-498341
9a4a13228e9e5469ddcd1907645842a390fd1c909c70aed8114132367e551313
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-498341 image build -t localhost/my-image:functional-498341 testdata/build --alsologtostderr:
I1124 09:37:35.169843 1836145 out.go:360] Setting OutFile to fd 1 ...
I1124 09:37:35.171246 1836145 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:35.171267 1836145 out.go:374] Setting ErrFile to fd 2...
I1124 09:37:35.171274 1836145 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 09:37:35.171616 1836145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 09:37:35.172333 1836145 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:35.173052 1836145 config.go:182] Loaded profile config "functional-498341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1124 09:37:35.173691 1836145 cli_runner.go:164] Run: docker container inspect functional-498341 --format={{.State.Status}}
I1124 09:37:35.193254 1836145 ssh_runner.go:195] Run: systemctl --version
I1124 09:37:35.193314 1836145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-498341
I1124 09:37:35.215629 1836145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35000 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-498341/id_rsa Username:docker}
I1124 09:37:35.319652 1836145 build_images.go:162] Building image from path: /tmp/build.331773661.tar
I1124 09:37:35.319747 1836145 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 09:37:35.327648 1836145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.331773661.tar
I1124 09:37:35.331300 1836145 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.331773661.tar: stat -c "%s %y" /var/lib/minikube/build/build.331773661.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.331773661.tar': No such file or directory
I1124 09:37:35.331332 1836145 ssh_runner.go:362] scp /tmp/build.331773661.tar --> /var/lib/minikube/build/build.331773661.tar (3072 bytes)
I1124 09:37:35.349775 1836145 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.331773661
I1124 09:37:35.358510 1836145 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.331773661 -xf /var/lib/minikube/build/build.331773661.tar
I1124 09:37:35.366551 1836145 crio.go:315] Building image: /var/lib/minikube/build/build.331773661
I1124 09:37:35.366634 1836145 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-498341 /var/lib/minikube/build/build.331773661 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1124 09:37:38.433331 1836145 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-498341 /var/lib/minikube/build/build.331773661 --cgroup-manager=cgroupfs: (3.066669665s)
I1124 09:37:38.433406 1836145 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.331773661
I1124 09:37:38.441300 1836145 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.331773661.tar
I1124 09:37:38.448652 1836145 build_images.go:218] Built localhost/my-image:functional-498341 from /tmp/build.331773661.tar
I1124 09:37:38.448685 1836145 build_images.go:134] succeeded building to: functional-498341
I1124 09:37:38.448690 1836145 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)

TestFunctional/parallel/ImageCommands/Setup (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-498341
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.62s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image rm kicbase/echo-server:functional-498341 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-498341 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-498341
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-498341
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-498341
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21978-1804834/.minikube/files/etc/test/nested/copy/1806704/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 cache add registry.k8s.io/pause:3.1: (1.224631503s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 cache add registry.k8s.io/pause:3.3: (1.190311067s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 cache add registry.k8s.io/pause:latest: (1.151468151s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.57s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3259818381/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cache add minikube-local-cache-test:functional-373432
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cache delete minikube-local-cache-test:functional-373432
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-373432
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.787241ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs805039836/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs805039836/001/logs.txt: (1.004905801s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 config get cpus: exit status 14 (87.700566ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 config get cpus: exit status 14 (110.709623ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-373432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (208.61351ms)

-- stdout --
	* [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1124 10:08:13.224945 1868722 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:08:13.225178 1868722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:08:13.225194 1868722 out.go:374] Setting ErrFile to fd 2...
	I1124 10:08:13.225201 1868722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:08:13.225540 1868722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:08:13.225930 1868722 out.go:368] Setting JSON to false
	I1124 10:08:13.226906 1868722 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31844,"bootTime":1763947050,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 10:08:13.226974 1868722 start.go:143] virtualization:  
	I1124 10:08:13.230391 1868722 out.go:179] * [functional-373432] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 10:08:13.234191 1868722 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 10:08:13.234347 1868722 notify.go:221] Checking for updates...
	I1124 10:08:13.240003 1868722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 10:08:13.242794 1868722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:08:13.246210 1868722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 10:08:13.250099 1868722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 10:08:13.252945 1868722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 10:08:13.256252 1868722 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 10:08:13.256827 1868722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 10:08:13.297534 1868722 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 10:08:13.297647 1868722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:08:13.354748 1868722 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:08:13.345285592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:08:13.354857 1868722 docker.go:319] overlay module found
	I1124 10:08:13.357882 1868722 out.go:179] * Using the docker driver based on existing profile
	I1124 10:08:13.360657 1868722 start.go:309] selected driver: docker
	I1124 10:08:13.360681 1868722 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:08:13.360787 1868722 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 10:08:13.364515 1868722 out.go:203] 
	W1124 10:08:13.367537 1868722 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 10:08:13.370478 1868722 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373432 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-373432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (185.196011ms)

-- stdout --
	* [functional-373432] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1124 10:08:13.033228 1868675 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:08:13.033403 1868675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:08:13.033434 1868675 out.go:374] Setting ErrFile to fd 2...
	I1124 10:08:13.033457 1868675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:08:13.033839 1868675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:08:13.034246 1868675 out.go:368] Setting JSON to false
	I1124 10:08:13.035154 1868675 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31843,"bootTime":1763947050,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1124 10:08:13.035255 1868675 start.go:143] virtualization:  
	I1124 10:08:13.038998 1868675 out.go:179] * [functional-373432] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1124 10:08:13.043567 1868675 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 10:08:13.043671 1868675 notify.go:221] Checking for updates...
	I1124 10:08:13.050586 1868675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 10:08:13.053764 1868675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	I1124 10:08:13.056907 1868675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	I1124 10:08:13.060019 1868675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 10:08:13.063259 1868675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 10:08:13.066834 1868675 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1124 10:08:13.067436 1868675 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 10:08:13.089656 1868675 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 10:08:13.089773 1868675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:08:13.146017 1868675 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 10:08:13.136917663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:08:13.146123 1868675 docker.go:319] overlay module found
	I1124 10:08:13.149292 1868675 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 10:08:13.152166 1868675 start.go:309] selected driver: docker
	I1124 10:08:13.152188 1868675 start.go:927] validating driver "docker" against &{Name:functional-373432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-373432 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 10:08:13.152293 1868675 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 10:08:13.155900 1868675 out.go:203] 
	W1124 10:08:13.158792 1868675 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 10:08:13.161651 1868675 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh -n functional-373432 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cp functional-373432:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3998041042/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh -n functional-373432 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh -n functional-373432 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1806704/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo cat /etc/test/nested/copy/1806704/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1806704.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo cat /etc/ssl/certs/1806704.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1806704.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo cat /usr/share/ca-certificates/1806704.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/18067042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo cat /etc/ssl/certs/18067042.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/18067042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo cat /usr/share/ca-certificates/18067042.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 ssh "sudo systemctl is-active docker": exit status 1 (375.737818ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 ssh "sudo systemctl is-active containerd": exit status 1 (349.034413ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373432 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/etcd:3.5.24-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373432 image ls --format short --alsologtostderr:
I1124 10:08:16.386844 1869447 out.go:360] Setting OutFile to fd 1 ...
I1124 10:08:16.387046 1869447 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:16.387073 1869447 out.go:374] Setting ErrFile to fd 2...
I1124 10:08:16.387091 1869447 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:16.387365 1869447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 10:08:16.388022 1869447 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:16.388211 1869447 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:16.388770 1869447 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
I1124 10:08:16.405654 1869447 ssh_runner.go:195] Run: systemctl --version
I1124 10:08:16.405719 1869447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
I1124 10:08:16.422582 1869447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
I1124 10:08:16.527753 1869447 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373432 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.5.24-0          │ 1211402d28f58 │ 63.3MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/pause                   │ 3.1               │ 8057e0500773a │ 529kB  │
│ localhost/my-image                      │ functional-373432 │ f5ab37ce8cc1d │ 1.64MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 16378741539f1 │ 49.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 66749159455b3 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ latest            │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ ccd634d9bcc36 │ 84.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ 3.10.1            │ d7b100cd9a77b │ 517kB  │
│ registry.k8s.io/pause                   │ 3.3               │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ latest            │ 71a676dd070f4 │ 1.63MB │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373432 image ls --format table --alsologtostderr:
I1124 10:08:20.833093 1869939 out.go:360] Setting OutFile to fd 1 ...
I1124 10:08:20.833302 1869939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:20.833329 1869939 out.go:374] Setting ErrFile to fd 2...
I1124 10:08:20.833349 1869939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:20.833638 1869939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 10:08:20.834278 1869939 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:20.834455 1869939 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:20.835040 1869939 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
I1124 10:08:20.851698 1869939 ssh_runner.go:195] Run: systemctl --version
I1124 10:08:20.851751 1869939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
I1124 10:08:20.871082 1869939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
I1124 10:08:20.975702 1869939 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373432 image ls --format json --alsologtostderr:
[{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"c33e7a12e7528f6799bcb0cc882a2a14a70547cfef943a30af72fa7fb939d62f","repoDigests":["docker.io/library/da29e9964e5e719d3b62b21f2bc531533aa1cb9301ee571f2172956cc154a0f2-tmp@sha256:77bd2c2f84508964c2c5d80b1697ae8120ec1a959a8be52903cd65f7428a733c"],"repoTags":[],"size":"1638179"},{"id":"1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca","repoDigests":["registry.k8s.io/etcd@sha256:62cae8d38
d7e1187ef2841ebc55bef1c5a46f21a69675fae8351f92d3a3e9bc6"],"repoTags":["registry.k8s.io/etcd:3.5.24-0"],"size":"63341525"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74105124"},{"id":"66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29035622"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72167568"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c
43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49819792"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"517328"},{"id":"f5ab37ce8cc1d298e0b3c1f61393c5669b5919aa62ee59ab4ce238cb7f873fc3","repoDigests":["localhost/my-image@sha256:79b3817ac12bf9dbebf5993b581f5f1106d1d9a42d2f2e5e3b59bef2cd1ad0e6"],"repoTags":["localhost/my-image:functional-373432"],"size":"1640790"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
"repoDigests":["registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74488375"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84947242"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.i
o/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373432 image ls --format json --alsologtostderr:
I1124 10:08:20.599002 1869897 out.go:360] Setting OutFile to fd 1 ...
I1124 10:08:20.599129 1869897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:20.599140 1869897 out.go:374] Setting ErrFile to fd 2...
I1124 10:08:20.599146 1869897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:20.599392 1869897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 10:08:20.600000 1869897 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:20.600133 1869897 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:20.600669 1869897 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
I1124 10:08:20.617564 1869897 ssh_runner.go:195] Run: systemctl --version
I1124 10:08:20.617624 1869897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
I1124 10:08:20.634518 1869897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
I1124 10:08:20.735569 1869897 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373432 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29035622"
- id: 1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca
repoDigests:
- registry.k8s.io/etcd@sha256:62cae8d38d7e1187ef2841ebc55bef1c5a46f21a69675fae8351f92d3a3e9bc6
repoTags:
- registry.k8s.io/etcd:3.5.24-0
size: "63341525"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84947242"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72167568"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49819792"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde
repoTags:
- registry.k8s.io/pause:3.10.1
size: "517328"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74488375"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74105124"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373432 image ls --format yaml --alsologtostderr:
I1124 10:08:16.614821 1869485 out.go:360] Setting OutFile to fd 1 ...
I1124 10:08:16.614962 1869485 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:16.614973 1869485 out.go:374] Setting ErrFile to fd 2...
I1124 10:08:16.614993 1869485 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:16.615302 1869485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 10:08:16.615935 1869485 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:16.616104 1869485 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:16.616643 1869485 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
I1124 10:08:16.633480 1869485 ssh_runner.go:195] Run: systemctl --version
I1124 10:08:16.633538 1869485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
I1124 10:08:16.649877 1869485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
I1124 10:08:16.751237 1869485 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 ssh pgrep buildkitd: exit status 1 (263.038821ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image build -t localhost/my-image:functional-373432 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-373432 image build -t localhost/my-image:functional-373432 testdata/build --alsologtostderr: (3.267805684s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373432 image build -t localhost/my-image:functional-373432 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c33e7a12e75
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-373432
--> f5ab37ce8cc
Successfully tagged localhost/my-image:functional-373432
f5ab37ce8cc1d298e0b3c1f61393c5669b5919aa62ee59ab4ce238cb7f873fc3
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373432 image build -t localhost/my-image:functional-373432 testdata/build --alsologtostderr:
I1124 10:08:17.094290 1869589 out.go:360] Setting OutFile to fd 1 ...
I1124 10:08:17.094424 1869589 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:17.094462 1869589 out.go:374] Setting ErrFile to fd 2...
I1124 10:08:17.094476 1869589 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 10:08:17.094734 1869589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
I1124 10:08:17.095337 1869589 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:17.095947 1869589 config.go:182] Loaded profile config "functional-373432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1124 10:08:17.096458 1869589 cli_runner.go:164] Run: docker container inspect functional-373432 --format={{.State.Status}}
I1124 10:08:17.113263 1869589 ssh_runner.go:195] Run: systemctl --version
I1124 10:08:17.113320 1869589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373432
I1124 10:08:17.130192 1869589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35005 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/functional-373432/id_rsa Username:docker}
I1124 10:08:17.235677 1869589 build_images.go:162] Building image from path: /tmp/build.3254345802.tar
I1124 10:08:17.235787 1869589 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 10:08:17.243329 1869589 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3254345802.tar
I1124 10:08:17.246801 1869589 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3254345802.tar: stat -c "%s %y" /var/lib/minikube/build/build.3254345802.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3254345802.tar': No such file or directory
I1124 10:08:17.246832 1869589 ssh_runner.go:362] scp /tmp/build.3254345802.tar --> /var/lib/minikube/build/build.3254345802.tar (3072 bytes)
I1124 10:08:17.263393 1869589 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3254345802
I1124 10:08:17.270901 1869589 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3254345802 -xf /var/lib/minikube/build/build.3254345802.tar
I1124 10:08:17.278998 1869589 crio.go:315] Building image: /var/lib/minikube/build/build.3254345802
I1124 10:08:17.279075 1869589 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-373432 /var/lib/minikube/build/build.3254345802 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1124 10:08:20.292346 1869589 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-373432 /var/lib/minikube/build/build.3254345802 --cgroup-manager=cgroupfs: (3.013242144s)
I1124 10:08:20.292420 1869589 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3254345802
I1124 10:08:20.299993 1869589 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3254345802.tar
I1124 10:08:20.307359 1869589 build_images.go:218] Built localhost/my-image:functional-373432 from /tmp/build.3254345802.tar
I1124 10:08:20.307391 1869589 build_images.go:134] succeeded building to: functional-373432
I1124 10:08:20.307397 1869589 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-373432
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image rm kicbase/echo-server:functional-373432 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-373432 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "344.958644ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.322979ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "346.98019ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.762369ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2629282086/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.485633ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 10:08:07.359968 1806704 retry.go:31] will retry after 262.47778ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2629282086/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373432 ssh "sudo umount -f /mount-9p": exit status 1 (267.021605ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-373432 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2629282086/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-373432 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-373432 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373432 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1134803463/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-373432
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-373432
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-373432
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (196.95s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 10:10:19.926018 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:36.849771 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:53.142651 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:53.149039 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:53.160431 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:53.181882 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:53.223289 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:53.304700 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:53.466183 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:53.787632 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:54.429825 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:55.711169 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:10:58.273266 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:11:03.395545 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:11:13.636901 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:11:34.118726 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:12:15.080152 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:12:54.299769 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m16.045200885s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (196.95s)

TestMultiControlPlane/serial/DeployApp (7.05s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 kubectl -- rollout status deployment/busybox: (4.190762289s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-88fth -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-pwgl5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-snzk9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-88fth -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-pwgl5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-snzk9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-88fth -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-pwgl5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-snzk9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.05s)

TestMultiControlPlane/serial/PingHostFromPods (1.51s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-88fth -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-88fth -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-pwgl5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-pwgl5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-snzk9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 kubectl -- exec busybox-7b57f96db7-snzk9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)

TestMultiControlPlane/serial/AddWorkerNode (58.99s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 node add --alsologtostderr -v 5
E1124 10:13:37.002128 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 node add --alsologtostderr -v 5: (57.922411124s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5: (1.070031599s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.99s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-901373 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.102671477s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (20.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 status --output json --alsologtostderr -v 5: (1.032906986s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp testdata/cp-test.txt ha-901373:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3032335798/001/cp-test_ha-901373.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373:/home/docker/cp-test.txt ha-901373-m02:/home/docker/cp-test_ha-901373_ha-901373-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m02 "sudo cat /home/docker/cp-test_ha-901373_ha-901373-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373:/home/docker/cp-test.txt ha-901373-m03:/home/docker/cp-test_ha-901373_ha-901373-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m03 "sudo cat /home/docker/cp-test_ha-901373_ha-901373-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373:/home/docker/cp-test.txt ha-901373-m04:/home/docker/cp-test_ha-901373_ha-901373-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m04 "sudo cat /home/docker/cp-test_ha-901373_ha-901373-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp testdata/cp-test.txt ha-901373-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3032335798/001/cp-test_ha-901373-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m02:/home/docker/cp-test.txt ha-901373:/home/docker/cp-test_ha-901373-m02_ha-901373.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373 "sudo cat /home/docker/cp-test_ha-901373-m02_ha-901373.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m02:/home/docker/cp-test.txt ha-901373-m03:/home/docker/cp-test_ha-901373-m02_ha-901373-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m03 "sudo cat /home/docker/cp-test_ha-901373-m02_ha-901373-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m02:/home/docker/cp-test.txt ha-901373-m04:/home/docker/cp-test_ha-901373-m02_ha-901373-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m04 "sudo cat /home/docker/cp-test_ha-901373-m02_ha-901373-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp testdata/cp-test.txt ha-901373-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3032335798/001/cp-test_ha-901373-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m03:/home/docker/cp-test.txt ha-901373:/home/docker/cp-test_ha-901373-m03_ha-901373.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373 "sudo cat /home/docker/cp-test_ha-901373-m03_ha-901373.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m03:/home/docker/cp-test.txt ha-901373-m02:/home/docker/cp-test_ha-901373-m03_ha-901373-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m02 "sudo cat /home/docker/cp-test_ha-901373-m03_ha-901373-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m03:/home/docker/cp-test.txt ha-901373-m04:/home/docker/cp-test_ha-901373-m03_ha-901373-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m04 "sudo cat /home/docker/cp-test_ha-901373-m03_ha-901373-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp testdata/cp-test.txt ha-901373-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3032335798/001/cp-test_ha-901373-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m04:/home/docker/cp-test.txt ha-901373:/home/docker/cp-test_ha-901373-m04_ha-901373.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373 "sudo cat /home/docker/cp-test_ha-901373-m04_ha-901373.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m04:/home/docker/cp-test.txt ha-901373-m02:/home/docker/cp-test_ha-901373-m04_ha-901373-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m02 "sudo cat /home/docker/cp-test_ha-901373-m04_ha-901373-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 cp ha-901373-m04:/home/docker/cp-test.txt ha-901373-m03:/home/docker/cp-test_ha-901373-m04_ha-901373-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 ssh -n ha-901373-m03 "sudo cat /home/docker/cp-test_ha-901373-m04_ha-901373-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.36s)

TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 node stop m02 --alsologtostderr -v 5: (12.066582322s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5: exit status 7 (834.578242ms)

-- stdout --
	ha-901373
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-901373-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901373-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-901373-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1124 10:14:55.147845 1885544 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:14:55.148079 1885544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:14:55.148088 1885544 out.go:374] Setting ErrFile to fd 2...
	I1124 10:14:55.148094 1885544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:14:55.148377 1885544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:14:55.148601 1885544 out.go:368] Setting JSON to false
	I1124 10:14:55.148637 1885544 mustload.go:66] Loading cluster: ha-901373
	I1124 10:14:55.148703 1885544 notify.go:221] Checking for updates...
	I1124 10:14:55.149686 1885544 config.go:182] Loaded profile config "ha-901373": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:14:55.149707 1885544 status.go:174] checking status of ha-901373 ...
	I1124 10:14:55.150573 1885544 cli_runner.go:164] Run: docker container inspect ha-901373 --format={{.State.Status}}
	I1124 10:14:55.174312 1885544 status.go:371] ha-901373 host status = "Running" (err=<nil>)
	I1124 10:14:55.174340 1885544 host.go:66] Checking if "ha-901373" exists ...
	I1124 10:14:55.174649 1885544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901373
	I1124 10:14:55.205318 1885544 host.go:66] Checking if "ha-901373" exists ...
	I1124 10:14:55.205716 1885544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:14:55.205769 1885544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901373
	I1124 10:14:55.229334 1885544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35010 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/ha-901373/id_rsa Username:docker}
	I1124 10:14:55.342726 1885544 ssh_runner.go:195] Run: systemctl --version
	I1124 10:14:55.350578 1885544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:14:55.363101 1885544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:14:55.431842 1885544 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-24 10:14:55.421727867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:14:55.432380 1885544 kubeconfig.go:125] found "ha-901373" server: "https://192.168.49.254:8443"
	I1124 10:14:55.432416 1885544 api_server.go:166] Checking apiserver status ...
	I1124 10:14:55.432464 1885544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:14:55.444239 1885544 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup
	I1124 10:14:55.453176 1885544 api_server.go:182] apiserver freezer: "5:freezer:/docker/d3d6928ea4042bbb921fb259f36ce27eea38b9e1471c14b1e883c8ae1aea6657/crio/crio-caef4a73014a0526b72961e344200042b27d8e5bf4ad88f7e2a2148efafad29f"
	I1124 10:14:55.453252 1885544 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d3d6928ea4042bbb921fb259f36ce27eea38b9e1471c14b1e883c8ae1aea6657/crio/crio-caef4a73014a0526b72961e344200042b27d8e5bf4ad88f7e2a2148efafad29f/freezer.state
	I1124 10:14:55.461175 1885544 api_server.go:204] freezer state: "THAWED"
	I1124 10:14:55.461201 1885544 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 10:14:55.471940 1885544 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 10:14:55.471970 1885544 status.go:463] ha-901373 apiserver status = Running (err=<nil>)
	I1124 10:14:55.471981 1885544 status.go:176] ha-901373 status: &{Name:ha-901373 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 10:14:55.471997 1885544 status.go:174] checking status of ha-901373-m02 ...
	I1124 10:14:55.472308 1885544 cli_runner.go:164] Run: docker container inspect ha-901373-m02 --format={{.State.Status}}
	I1124 10:14:55.489437 1885544 status.go:371] ha-901373-m02 host status = "Stopped" (err=<nil>)
	I1124 10:14:55.489466 1885544 status.go:384] host is not running, skipping remaining checks
	I1124 10:14:55.489474 1885544 status.go:176] ha-901373-m02 status: &{Name:ha-901373-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 10:14:55.489494 1885544 status.go:174] checking status of ha-901373-m03 ...
	I1124 10:14:55.489800 1885544 cli_runner.go:164] Run: docker container inspect ha-901373-m03 --format={{.State.Status}}
	I1124 10:14:55.508503 1885544 status.go:371] ha-901373-m03 host status = "Running" (err=<nil>)
	I1124 10:14:55.508536 1885544 host.go:66] Checking if "ha-901373-m03" exists ...
	I1124 10:14:55.508857 1885544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901373-m03
	I1124 10:14:55.529144 1885544 host.go:66] Checking if "ha-901373-m03" exists ...
	I1124 10:14:55.529496 1885544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:14:55.529547 1885544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901373-m03
	I1124 10:14:55.550912 1885544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35020 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/ha-901373-m03/id_rsa Username:docker}
	I1124 10:14:55.662651 1885544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:14:55.677683 1885544 kubeconfig.go:125] found "ha-901373" server: "https://192.168.49.254:8443"
	I1124 10:14:55.677725 1885544 api_server.go:166] Checking apiserver status ...
	I1124 10:14:55.677779 1885544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:14:55.692288 1885544 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1189/cgroup
	I1124 10:14:55.700943 1885544 api_server.go:182] apiserver freezer: "5:freezer:/docker/e20bec77ca6d05d692226ba473c90053786d0408599e6a7db0528308ce0ae46a/crio/crio-cd9a576e9f1887b6f50013d2bebfa38fb16f1a955ea9c38951f5b68bbadb5ea7"
	I1124 10:14:55.701059 1885544 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e20bec77ca6d05d692226ba473c90053786d0408599e6a7db0528308ce0ae46a/crio/crio-cd9a576e9f1887b6f50013d2bebfa38fb16f1a955ea9c38951f5b68bbadb5ea7/freezer.state
	I1124 10:14:55.709788 1885544 api_server.go:204] freezer state: "THAWED"
	I1124 10:14:55.709868 1885544 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 10:14:55.718914 1885544 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 10:14:55.718947 1885544 status.go:463] ha-901373-m03 apiserver status = Running (err=<nil>)
	I1124 10:14:55.718957 1885544 status.go:176] ha-901373-m03 status: &{Name:ha-901373-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 10:14:55.718974 1885544 status.go:174] checking status of ha-901373-m04 ...
	I1124 10:14:55.719285 1885544 cli_runner.go:164] Run: docker container inspect ha-901373-m04 --format={{.State.Status}}
	I1124 10:14:55.749296 1885544 status.go:371] ha-901373-m04 host status = "Running" (err=<nil>)
	I1124 10:14:55.749335 1885544 host.go:66] Checking if "ha-901373-m04" exists ...
	I1124 10:14:55.749741 1885544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901373-m04
	I1124 10:14:55.773171 1885544 host.go:66] Checking if "ha-901373-m04" exists ...
	I1124 10:14:55.773522 1885544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:14:55.773570 1885544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901373-m04
	I1124 10:14:55.791637 1885544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35025 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/ha-901373-m04/id_rsa Username:docker}
	I1124 10:14:55.898957 1885544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:14:55.912774 1885544 status.go:176] ha-901373-m04 status: &{Name:ha-901373-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.64s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 node start m02 --alsologtostderr -v 5: (19.304700978s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5: (1.21133481s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.64s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.531111727s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.53s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.98s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 stop --alsologtostderr -v 5
E1124 10:15:36.849481 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:15:53.147389 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 stop --alsologtostderr -v 5: (37.356178787s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 start --wait true --alsologtostderr -v 5
E1124 10:15:57.368351 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:16:20.845261 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 start --wait true --alsologtostderr -v 5: (1m25.447302061s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.98s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.95s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 node delete m03 --alsologtostderr -v 5: (10.974939149s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.95s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

TestMultiControlPlane/serial/StopCluster (36.39s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 stop --alsologtostderr -v 5
E1124 10:17:54.299556 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 stop --alsologtostderr -v 5: (36.268223635s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5: exit status 7 (121.66098ms)
-- stdout --
	ha-901373
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901373-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901373-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1124 10:18:10.972630 1897690 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:18:10.972835 1897690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:18:10.972861 1897690 out.go:374] Setting ErrFile to fd 2...
	I1124 10:18:10.972881 1897690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:18:10.973199 1897690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:18:10.973421 1897690 out.go:368] Setting JSON to false
	I1124 10:18:10.973477 1897690 mustload.go:66] Loading cluster: ha-901373
	I1124 10:18:10.973552 1897690 notify.go:221] Checking for updates...
	I1124 10:18:10.974822 1897690 config.go:182] Loaded profile config "ha-901373": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:18:10.974866 1897690 status.go:174] checking status of ha-901373 ...
	I1124 10:18:10.975582 1897690 cli_runner.go:164] Run: docker container inspect ha-901373 --format={{.State.Status}}
	I1124 10:18:10.993422 1897690 status.go:371] ha-901373 host status = "Stopped" (err=<nil>)
	I1124 10:18:10.993444 1897690 status.go:384] host is not running, skipping remaining checks
	I1124 10:18:10.993456 1897690 status.go:176] ha-901373 status: &{Name:ha-901373 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 10:18:10.993482 1897690 status.go:174] checking status of ha-901373-m02 ...
	I1124 10:18:10.993897 1897690 cli_runner.go:164] Run: docker container inspect ha-901373-m02 --format={{.State.Status}}
	I1124 10:18:11.024093 1897690 status.go:371] ha-901373-m02 host status = "Stopped" (err=<nil>)
	I1124 10:18:11.024115 1897690 status.go:384] host is not running, skipping remaining checks
	I1124 10:18:11.024121 1897690 status.go:176] ha-901373-m02 status: &{Name:ha-901373-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 10:18:11.024139 1897690 status.go:174] checking status of ha-901373-m04 ...
	I1124 10:18:11.024440 1897690 cli_runner.go:164] Run: docker container inspect ha-901373-m04 --format={{.State.Status}}
	I1124 10:18:11.042121 1897690 status.go:371] ha-901373-m04 host status = "Stopped" (err=<nil>)
	I1124 10:18:11.042154 1897690 status.go:384] host is not running, skipping remaining checks
	I1124 10:18:11.042169 1897690 status.go:176] ha-901373-m04 status: &{Name:ha-901373-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.39s)

TestMultiControlPlane/serial/RestartCluster (84.03s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m23.046176412s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (84.03s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (85.45s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 node add --control-plane --alsologtostderr -v 5
E1124 10:20:36.849284 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:20:53.142667 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 node add --control-plane --alsologtostderr -v 5: (1m24.061351133s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-901373 status --alsologtostderr -v 5: (1.393046046s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.095745069s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

TestJSONOutput/start/Command (81.34s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-605923 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-605923 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.335586265s)
--- PASS: TestJSONOutput/start/Command (81.34s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-605923 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-605923 --output=json --user=testUser: (5.80928074s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-239969 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-239969 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (93.555004ms)
-- stdout --
	{"specversion":"1.0","id":"b262cff8-95de-4285-a90f-a19d05782d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-239969] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"54404a00-7919-4737-a966-6b42fda274ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21978"}}
	{"specversion":"1.0","id":"360a38f3-26d4-48b1-9400-58e8b65101cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8e3b6222-983d-4456-a848-85548a2a51fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig"}}
	{"specversion":"1.0","id":"b44db2c6-97aa-4987-8f4d-4957e08533a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube"}}
	{"specversion":"1.0","id":"a0a88260-d307-4acb-91ea-36e5fbeefbca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"713f2f43-f432-4731-b778-d65ada6a2a22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c4dfd61b-c121-4ef4-b2c2-fa86a95bde77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-239969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-239969
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (39.34s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-026275 --network=
E1124 10:22:54.299928 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-026275 --network=: (37.071874098s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-026275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-026275
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-026275: (2.245016856s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.34s)

TestKicCustomNetwork/use_default_bridge_network (34.24s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-144182 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-144182 --network=bridge: (32.141234444s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-144182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-144182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-144182: (2.068645509s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.24s)

TestKicExistingNetwork (38.86s)

=== RUN   TestKicExistingNetwork
I1124 10:23:59.559190 1806704 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 10:23:59.575976 1806704 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 10:23:59.576946 1806704 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 10:23:59.576984 1806704 cli_runner.go:164] Run: docker network inspect existing-network
W1124 10:23:59.593136 1806704 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 10:23:59.593169 1806704 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1124 10:23:59.593185 1806704 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1124 10:23:59.593321 1806704 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 10:23:59.610810 1806704 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b39f8e694b2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:c3:8d:8c:34:1f} reservation:<nil>}
I1124 10:23:59.611149 1806704 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000349eb0}
I1124 10:23:59.611180 1806704 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 10:23:59.611239 1806704 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 10:23:59.676184 1806704 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-208210 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-208210 --network=existing-network: (36.570096204s)
helpers_test.go:175: Cleaning up "existing-network-208210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-208210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-208210: (2.141960757s)
I1124 10:24:38.404956 1806704 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.86s)

TestKicCustomSubnet (37.89s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-292856 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-292856 --subnet=192.168.60.0/24: (35.646368662s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-292856 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-292856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-292856
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-292856: (2.218542861s)
--- PASS: TestKicCustomSubnet (37.89s)

TestKicStaticIP (37.95s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-309153 --static-ip=192.168.200.200
E1124 10:25:36.850232 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-309153 --static-ip=192.168.200.200: (35.575553735s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-309153 ip
helpers_test.go:175: Cleaning up "static-ip-309153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-309153
E1124 10:25:53.142382 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-309153: (2.221340577s)
--- PASS: TestKicStaticIP (37.95s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (74.95s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-506901 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-506901 --driver=docker  --container-runtime=crio: (36.386485661s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-509543 --driver=docker  --container-runtime=crio
E1124 10:26:59.929828 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-509543 --driver=docker  --container-runtime=crio: (32.852175769s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-506901
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-509543
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-509543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-509543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-509543: (2.203272058s)
helpers_test.go:175: Cleaning up "first-506901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-506901
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-506901: (2.039585801s)
--- PASS: TestMinikubeProfile (74.95s)

TestMountStart/serial/StartWithMountFirst (8.73s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-556531 --memory=3072 --mount-string /tmp/TestMountStartserial1809960323/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1124 10:27:16.207579 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-556531 --memory=3072 --mount-string /tmp/TestMountStartserial1809960323/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.730925756s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.73s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-556531 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.51s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-558336 --memory=3072 --mount-string /tmp/TestMountStartserial1809960323/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-558336 --memory=3072 --mount-string /tmp/TestMountStartserial1809960323/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.508314649s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.51s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-558336 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-556531 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-556531 --alsologtostderr -v=5: (1.696573259s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-558336 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-558336
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-558336: (1.283219785s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (7.94s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-558336
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-558336: (6.936584976s)
--- PASS: TestMountStart/serial/RestartStopped (7.94s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-558336 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (140.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-278867 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1124 10:27:54.299960 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-278867 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m19.184041934s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.03s)

TestMultiNode/serial/DeployApp2Nodes (6.41s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-278867 -- rollout status deployment/busybox: (4.562125836s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-7kglx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-qdbz7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-7kglx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-qdbz7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-7kglx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-qdbz7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.41s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-7kglx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-7kglx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-qdbz7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-278867 -- exec busybox-7b57f96db7-qdbz7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (57.74s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-278867 -v=5 --alsologtostderr
E1124 10:30:36.849831 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:30:53.142889 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-278867 -v=5 --alsologtostderr: (57.030229842s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.74s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-278867 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.63s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp testdata/cp-test.txt multinode-278867:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3406622063/001/cp-test_multinode-278867.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867:/home/docker/cp-test.txt multinode-278867-m02:/home/docker/cp-test_multinode-278867_multinode-278867-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m02 "sudo cat /home/docker/cp-test_multinode-278867_multinode-278867-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867:/home/docker/cp-test.txt multinode-278867-m03:/home/docker/cp-test_multinode-278867_multinode-278867-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m03 "sudo cat /home/docker/cp-test_multinode-278867_multinode-278867-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp testdata/cp-test.txt multinode-278867-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3406622063/001/cp-test_multinode-278867-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867-m02:/home/docker/cp-test.txt multinode-278867:/home/docker/cp-test_multinode-278867-m02_multinode-278867.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867 "sudo cat /home/docker/cp-test_multinode-278867-m02_multinode-278867.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867-m02:/home/docker/cp-test.txt multinode-278867-m03:/home/docker/cp-test_multinode-278867-m02_multinode-278867-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m03 "sudo cat /home/docker/cp-test_multinode-278867-m02_multinode-278867-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp testdata/cp-test.txt multinode-278867-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3406622063/001/cp-test_multinode-278867-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867-m03:/home/docker/cp-test.txt multinode-278867:/home/docker/cp-test_multinode-278867-m03_multinode-278867.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867 "sudo cat /home/docker/cp-test_multinode-278867-m03_multinode-278867.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 cp multinode-278867-m03:/home/docker/cp-test.txt multinode-278867-m02:/home/docker/cp-test_multinode-278867-m03_multinode-278867-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 ssh -n multinode-278867-m02 "sudo cat /home/docker/cp-test_multinode-278867-m03_multinode-278867-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.63s)

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-278867 node stop m03: (1.304487881s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-278867 status: exit status 7 (547.350942ms)

-- stdout --
	multinode-278867
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-278867-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-278867-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-278867 status --alsologtostderr: exit status 7 (536.316068ms)

-- stdout --
	multinode-278867
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-278867-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-278867-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 10:31:18.925578 1948414 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:31:18.925779 1948414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:31:18.925815 1948414 out.go:374] Setting ErrFile to fd 2...
	I1124 10:31:18.925836 1948414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:31:18.926171 1948414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:31:18.926405 1948414 out.go:368] Setting JSON to false
	I1124 10:31:18.926473 1948414 mustload.go:66] Loading cluster: multinode-278867
	I1124 10:31:18.926602 1948414 notify.go:221] Checking for updates...
	I1124 10:31:18.927721 1948414 config.go:182] Loaded profile config "multinode-278867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:31:18.927779 1948414 status.go:174] checking status of multinode-278867 ...
	I1124 10:31:18.928364 1948414 cli_runner.go:164] Run: docker container inspect multinode-278867 --format={{.State.Status}}
	I1124 10:31:18.948468 1948414 status.go:371] multinode-278867 host status = "Running" (err=<nil>)
	I1124 10:31:18.948493 1948414 host.go:66] Checking if "multinode-278867" exists ...
	I1124 10:31:18.948815 1948414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-278867
	I1124 10:31:18.968101 1948414 host.go:66] Checking if "multinode-278867" exists ...
	I1124 10:31:18.968399 1948414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:31:18.968456 1948414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-278867
	I1124 10:31:18.992925 1948414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35130 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/multinode-278867/id_rsa Username:docker}
	I1124 10:31:19.099134 1948414 ssh_runner.go:195] Run: systemctl --version
	I1124 10:31:19.105947 1948414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:31:19.119659 1948414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 10:31:19.178296 1948414 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 10:31:19.168171603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 10:31:19.178831 1948414 kubeconfig.go:125] found "multinode-278867" server: "https://192.168.67.2:8443"
	I1124 10:31:19.178875 1948414 api_server.go:166] Checking apiserver status ...
	I1124 10:31:19.178923 1948414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 10:31:19.190574 1948414 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	I1124 10:31:19.198909 1948414 api_server.go:182] apiserver freezer: "5:freezer:/docker/3e541d9339a98230ad018b1e3cc0210caf846bbc16450b6385842cea42603c15/crio/crio-010bf046bd6759496024a76a55e8b034d96c3e776abf18eae566d5975c0d9ccc"
	I1124 10:31:19.198979 1948414 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e541d9339a98230ad018b1e3cc0210caf846bbc16450b6385842cea42603c15/crio/crio-010bf046bd6759496024a76a55e8b034d96c3e776abf18eae566d5975c0d9ccc/freezer.state
	I1124 10:31:19.206691 1948414 api_server.go:204] freezer state: "THAWED"
	I1124 10:31:19.206720 1948414 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 10:31:19.216103 1948414 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 10:31:19.216149 1948414 status.go:463] multinode-278867 apiserver status = Running (err=<nil>)
	I1124 10:31:19.216160 1948414 status.go:176] multinode-278867 status: &{Name:multinode-278867 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 10:31:19.216182 1948414 status.go:174] checking status of multinode-278867-m02 ...
	I1124 10:31:19.216516 1948414 cli_runner.go:164] Run: docker container inspect multinode-278867-m02 --format={{.State.Status}}
	I1124 10:31:19.233533 1948414 status.go:371] multinode-278867-m02 host status = "Running" (err=<nil>)
	I1124 10:31:19.233576 1948414 host.go:66] Checking if "multinode-278867-m02" exists ...
	I1124 10:31:19.233879 1948414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-278867-m02
	I1124 10:31:19.251525 1948414 host.go:66] Checking if "multinode-278867-m02" exists ...
	I1124 10:31:19.251846 1948414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 10:31:19.251885 1948414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-278867-m02
	I1124 10:31:19.268989 1948414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35135 SSHKeyPath:/home/jenkins/minikube-integration/21978-1804834/.minikube/machines/multinode-278867-m02/id_rsa Username:docker}
	I1124 10:31:19.374347 1948414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 10:31:19.387362 1948414 status.go:176] multinode-278867-m02 status: &{Name:multinode-278867-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 10:31:19.387398 1948414 status.go:174] checking status of multinode-278867-m03 ...
	I1124 10:31:19.387752 1948414 cli_runner.go:164] Run: docker container inspect multinode-278867-m03 --format={{.State.Status}}
	I1124 10:31:19.405579 1948414 status.go:371] multinode-278867-m03 host status = "Stopped" (err=<nil>)
	I1124 10:31:19.405603 1948414 status.go:384] host is not running, skipping remaining checks
	I1124 10:31:19.405610 1948414 status.go:176] multinode-278867-m03 status: &{Name:multinode-278867-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)

TestMultiNode/serial/StartAfterStop (8.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-278867 node start m03 -v=5 --alsologtostderr: (7.434771365s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.22s)

TestMultiNode/serial/RestartKeepsNodes (78.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-278867
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-278867
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-278867: (25.118699291s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-278867 --wait=true -v=5 --alsologtostderr
E1124 10:32:37.370729 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-278867 --wait=true -v=5 --alsologtostderr: (53.653955516s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-278867
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.90s)

TestMultiNode/serial/DeleteNode (5.72s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-278867 node delete m03: (5.025787794s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)

TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 stop
E1124 10:32:54.299877 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-278867 stop: (23.799527731s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-278867 status: exit status 7 (97.4387ms)

-- stdout --
	multinode-278867
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-278867-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-278867 status --alsologtostderr: exit status 7 (99.632449ms)

-- stdout --
	multinode-278867
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-278867-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 10:33:16.200035 1956258 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:33:16.200228 1956258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:33:16.200265 1956258 out.go:374] Setting ErrFile to fd 2...
	I1124 10:33:16.200285 1956258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:33:16.200599 1956258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:33:16.200833 1956258 out.go:368] Setting JSON to false
	I1124 10:33:16.200892 1956258 mustload.go:66] Loading cluster: multinode-278867
	I1124 10:33:16.200970 1956258 notify.go:221] Checking for updates...
	I1124 10:33:16.202277 1956258 config.go:182] Loaded profile config "multinode-278867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:33:16.202466 1956258 status.go:174] checking status of multinode-278867 ...
	I1124 10:33:16.203062 1956258 cli_runner.go:164] Run: docker container inspect multinode-278867 --format={{.State.Status}}
	I1124 10:33:16.222132 1956258 status.go:371] multinode-278867 host status = "Stopped" (err=<nil>)
	I1124 10:33:16.222152 1956258 status.go:384] host is not running, skipping remaining checks
	I1124 10:33:16.222159 1956258 status.go:176] multinode-278867 status: &{Name:multinode-278867 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 10:33:16.222183 1956258 status.go:174] checking status of multinode-278867-m02 ...
	I1124 10:33:16.222492 1956258 cli_runner.go:164] Run: docker container inspect multinode-278867-m02 --format={{.State.Status}}
	I1124 10:33:16.252362 1956258 status.go:371] multinode-278867-m02 host status = "Stopped" (err=<nil>)
	I1124 10:33:16.252385 1956258 status.go:384] host is not running, skipping remaining checks
	I1124 10:33:16.252392 1956258 status.go:176] multinode-278867-m02 status: &{Name:multinode-278867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

TestMultiNode/serial/RestartMultiNode (59.23s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-278867 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-278867 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (58.483494352s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-278867 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.23s)

TestMultiNode/serial/ValidateNameConflict (37.54s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-278867
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-278867-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-278867-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.36087ms)

-- stdout --
	* [multinode-278867-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-278867-m02' is duplicated with machine name 'multinode-278867-m02' in profile 'multinode-278867'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-278867-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-278867-m03 --driver=docker  --container-runtime=crio: (34.938576588s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-278867
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-278867: exit status 80 (347.110068ms)

-- stdout --
	* Adding node m03 to cluster multinode-278867 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-278867-m03 already exists in multinode-278867-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-278867-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-278867-m03: (2.105268711s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.54s)

TestPreload (133.01s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-778590 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1124 10:35:36.849246 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:35:53.142877 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-778590 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.748255089s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-778590 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-778590 image pull gcr.io/k8s-minikube/busybox: (2.201569978s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-778590
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-778590: (6.012744289s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-778590 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-778590 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (58.340832887s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-778590 image list
helpers_test.go:175: Cleaning up "test-preload-778590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-778590
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-778590: (2.456953356s)
--- PASS: TestPreload (133.01s)

TestScheduledStopUnix (108.74s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-146074 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-146074 --memory=3072 --driver=docker  --container-runtime=crio: (32.163583764s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-146074 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 10:37:42.681341 1970392 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:37:42.681543 1970392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:37:42.681571 1970392 out.go:374] Setting ErrFile to fd 2...
	I1124 10:37:42.681591 1970392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:37:42.681886 1970392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:37:42.682222 1970392 out.go:368] Setting JSON to false
	I1124 10:37:42.682392 1970392 mustload.go:66] Loading cluster: scheduled-stop-146074
	I1124 10:37:42.682804 1970392 config.go:182] Loaded profile config "scheduled-stop-146074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:37:42.682917 1970392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/config.json ...
	I1124 10:37:42.683162 1970392 mustload.go:66] Loading cluster: scheduled-stop-146074
	I1124 10:37:42.683332 1970392 config.go:182] Loaded profile config "scheduled-stop-146074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-146074 -n scheduled-stop-146074
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-146074 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 10:37:43.136659 1970478 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:37:43.136838 1970478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:37:43.136865 1970478 out.go:374] Setting ErrFile to fd 2...
	I1124 10:37:43.136885 1970478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:37:43.137375 1970478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:37:43.138046 1970478 out.go:368] Setting JSON to false
	I1124 10:37:43.138990 1970478 daemonize_unix.go:73] killing process 1970408 as it is an old scheduled stop
	I1124 10:37:43.139111 1970478 mustload.go:66] Loading cluster: scheduled-stop-146074
	I1124 10:37:43.139643 1970478 config.go:182] Loaded profile config "scheduled-stop-146074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:37:43.139814 1970478 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/config.json ...
	I1124 10:37:43.140107 1970478 mustload.go:66] Loading cluster: scheduled-stop-146074
	I1124 10:37:43.140293 1970478 config.go:182] Loaded profile config "scheduled-stop-146074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 10:37:43.148174 1806704 retry.go:31] will retry after 126.293µs: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.149335 1806704 retry.go:31] will retry after 154.326µs: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.150479 1806704 retry.go:31] will retry after 119.357µs: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.151624 1806704 retry.go:31] will retry after 478.247µs: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.152722 1806704 retry.go:31] will retry after 320.529µs: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.153930 1806704 retry.go:31] will retry after 381.575µs: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.155077 1806704 retry.go:31] will retry after 1.688602ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.157559 1806704 retry.go:31] will retry after 1.073917ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.158710 1806704 retry.go:31] will retry after 2.682237ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.161924 1806704 retry.go:31] will retry after 2.235949ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.165259 1806704 retry.go:31] will retry after 8.482733ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.174797 1806704 retry.go:31] will retry after 11.704384ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.187102 1806704 retry.go:31] will retry after 11.744006ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.199449 1806704 retry.go:31] will retry after 10.630923ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.211205 1806704 retry.go:31] will retry after 18.170939ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
I1124 10:37:43.230658 1806704 retry.go:31] will retry after 32.50212ms: open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-146074 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1124 10:37:54.299503 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-146074 -n scheduled-stop-146074
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-146074
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-146074 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 10:38:09.076641 1970844 out.go:360] Setting OutFile to fd 1 ...
	I1124 10:38:09.076831 1970844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:38:09.076858 1970844 out.go:374] Setting ErrFile to fd 2...
	I1124 10:38:09.076878 1970844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 10:38:09.077180 1970844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-1804834/.minikube/bin
	I1124 10:38:09.077475 1970844 out.go:368] Setting JSON to false
	I1124 10:38:09.077615 1970844 mustload.go:66] Loading cluster: scheduled-stop-146074
	I1124 10:38:09.077998 1970844 config.go:182] Loaded profile config "scheduled-stop-146074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1124 10:38:09.078090 1970844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/scheduled-stop-146074/config.json ...
	I1124 10:38:09.078348 1970844 mustload.go:66] Loading cluster: scheduled-stop-146074
	I1124 10:38:09.078530 1970844 config.go:182] Loaded profile config "scheduled-stop-146074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-146074
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-146074: exit status 7 (72.231585ms)

-- stdout --
	scheduled-stop-146074
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-146074 -n scheduled-stop-146074
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-146074 -n scheduled-stop-146074: exit status 7 (65.876991ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-146074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-146074
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-146074: (4.988357494s)
--- PASS: TestScheduledStopUnix (108.74s)

TestInsufficientStorage (12.7s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-682966 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-682966 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.118683739s)

-- stdout --
	{"specversion":"1.0","id":"d4afcac2-95bf-4cd8-b011-e4a1357d4e30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-682966] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"537e535d-cd5e-4d68-99e8-399aa0936c6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21978"}}
	{"specversion":"1.0","id":"955a9675-3167-4d9a-9329-8bcb37d9181e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d29b438f-58d9-4220-b556-48b9fb013901","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig"}}
	{"specversion":"1.0","id":"85f795a0-6a0f-46ed-99e0-19c939a5e826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube"}}
	{"specversion":"1.0","id":"bcb9cec9-9e6d-4b3a-af96-135fbf536583","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6c84d223-ceca-4f12-a068-b80bfec399c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"93c63180-1d27-40e1-9833-8a94930e8303","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"345cbf82-3302-41bf-b0d7-64b98f18fa39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e5ca9747-af1c-48d1-8d81-ac87d3e7587b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2baa18aa-911c-4d1f-a734-4f81ffbccde6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9df5d349-3e09-4e76-9c26-bf1fb678905a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-682966\" primary control-plane node in \"insufficient-storage-682966\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"162335ae-bae0-4de4-9148-f805c7d78b30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f182787e-c5c1-4b45-a1cb-7d1c948070c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d8fcd7f-7e70-4ebe-9e61-bce69183f0ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-682966 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-682966 --output=json --layout=cluster: exit status 7 (305.216236ms)

-- stdout --
	{"Name":"insufficient-storage-682966","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-682966","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1124 10:39:09.619116 1972551 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-682966" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-682966 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-682966 --output=json --layout=cluster: exit status 7 (317.749871ms)

-- stdout --
	{"Name":"insufficient-storage-682966","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-682966","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1124 10:39:09.936736 1972615 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-682966" does not appear in /home/jenkins/minikube-integration/21978-1804834/kubeconfig
	E1124 10:39:09.947110 1972615 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/insufficient-storage-682966/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-682966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-682966
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-682966: (1.957772627s)
--- PASS: TestInsufficientStorage (12.70s)

TestRunningBinaryUpgrade (52.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2655590509 start -p running-upgrade-832076 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2655590509 start -p running-upgrade-832076 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.4836351s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-832076 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 10:43:39.933535 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-832076 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.23781116s)
helpers_test.go:175: Cleaning up "running-upgrade-832076" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-832076
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-832076: (2.006376351s)
--- PASS: TestRunningBinaryUpgrade (52.49s)

TestMissingContainerUpgrade (117.99s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2929122821 start -p missing-upgrade-114074 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2929122821 start -p missing-upgrade-114074 --memory=3072 --driver=docker  --container-runtime=crio: (1m6.192715278s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-114074
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-114074
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-114074 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 10:40:36.850279 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 10:40:53.142849 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-114074 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.253667653s)
helpers_test.go:175: Cleaning up "missing-upgrade-114074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-114074
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-114074: (2.024771761s)
--- PASS: TestMissingContainerUpgrade (117.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-538948 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-538948 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (113.864636ms)

-- stdout --
	* [NoKubernetes-538948] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-1804834/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-1804834/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (41.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-538948 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-538948 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.439771575s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-538948 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.89s)

TestNoKubernetes/serial/StartWithStopK8s (98.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-538948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-538948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m35.436961556s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-538948 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-538948 status -o json: exit status 2 (413.853616ms)

-- stdout --
	{"Name":"NoKubernetes-538948","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-538948
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-538948: (2.414484979s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (98.27s)
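The `exit status 2` above is tolerated by the test: with Kubernetes intentionally stopped, `minikube status` appears to exit non-zero (2 in this run) even though the host container is healthy. A hedged sketch of how the profile JSON from this run distinguishes a healthy no-Kubernetes profile from a dead one:

```python
import json

# Profile status JSON copied from the `-- stdout --` block above.
raw = ('{"Name":"NoKubernetes-538948","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
status = json.loads(raw)

# Healthy for this test means: host up, Kubernetes components stopped.
host_only = (status["Host"] == "Running"
             and status["Kubelet"] == "Stopped"
             and status["APIServer"] == "Stopped")
print(host_only)  # True
```

A fully stopped profile would instead report `"Host":"Stopped"`, which the later `TestNoKubernetes/serial/Stop` step exercises.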

TestNoKubernetes/serial/Start (9.08s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-538948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-538948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.077879737s)
--- PASS: TestNoKubernetes/serial/Start (9.08s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21978-1804834/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-538948 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-538948 "sudo systemctl is-active --quiet service kubelet": exit status 1 (374.351269ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

TestNoKubernetes/serial/ProfileList (1.66s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.66s)

TestNoKubernetes/serial/Stop (1.55s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-538948
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-538948: (1.545706202s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

TestNoKubernetes/serial/StartNoArgs (7.85s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-538948 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-538948 --driver=docker  --container-runtime=crio: (7.85089091s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.85s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-538948 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-538948 "sudo systemctl is-active --quiet service kubelet": exit status 1 (363.019112ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (0.74s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

TestStoppedBinaryUpgrade/Upgrade (63.1s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.521124741 start -p stopped-upgrade-661807 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.521124741 start -p stopped-upgrade-661807 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.613559086s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.521124741 -p stopped-upgrade-661807 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.521124741 -p stopped-upgrade-661807 stop: (1.24608257s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-661807 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 10:42:54.299639 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-498341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-661807 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.234786984s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (63.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-661807
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-661807: (1.208031046s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

TestPause/serial/Start (87.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-245240 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1124 10:43:56.209266 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/functional-373432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-245240 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m27.017507387s)
--- PASS: TestPause/serial/Start (87.02s)

TestPause/serial/SecondStartNoReconfiguration (29.95s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-245240 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 10:45:36.849252 1806704 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-1804834/.minikube/profiles/addons-048116/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-245240 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.924022871s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.95s)


Test skip (35/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.15
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1124 09:12:43.847946 1806704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1124 09:12:43.943438 1806704 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
W1124 09:12:43.997903 1806704 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.15s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-417875 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-417875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-417875
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
